
Implementing a 3-Layer Neural Network

오지고지리고알파고포켓몬고 2017. 5. 19. 16:38

Propagation from layer 0 to layer 1

 

 

 

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))  # works elementwise on NumPy arrays

X = np.array([1.0, 0.5])
W1 = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])
B1 = np.array([0.1, 0.2, 0.3])

A1 = np.dot(X, W1) + B1
print(A1)
Z1 = sigmoid(A1)
print(Z1)

Layer 0 → layer 1: A1 is the weighted sum plus bias (np.dot(X, W1) + B1).

The sigmoid is used as the activation function (http://tastydarr.tistory.com/137).
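To double-check the matrix product by hand: each element of A1 is the dot product of X with one column of W1, plus the bias. A minimal check, reusing X, W1, B1, A1 from the code above (a1_manual is just an illustrative name):

# hand computation of A1, element by element
a1_manual = np.array([
    X[0] * W1[0][0] + X[1] * W1[1][0] + B1[0],   # 1.0*0.1 + 0.5*0.2 + 0.1 = 0.3
    X[0] * W1[0][1] + X[1] * W1[1][1] + B1[1],   # 1.0*0.3 + 0.5*0.4 + 0.2 = 0.7
    X[0] * W1[0][2] + X[1] * W1[1][2] + B1[2],   # 1.0*0.5 + 0.5*0.6 + 0.3 = 1.1
])
print(a1_manual)                   # [0.3 0.7 1.1]
print(np.allclose(a1_manual, A1))  # True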

 

 

Step-by-step implementation of the full forward pass

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))  # works elementwise on NumPy arrays

def identity_function(x):
    # identity function: returns the input unchanged
    # used as the output-layer activation here
    return x

# 0 - 1 Layer
X = np.array([1.0, 0.5])
W1 = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])
B1 = np.array([0.1, 0.2, 0.3])

A1 = np.dot(X, W1) + B1
print(A1)
Z1 = sigmoid(A1)
print(Z1)


# 1 - 2 Layer
W2 = np.array([[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]])
B2 = np.array([0.1, 0.2])

A2 = np.dot(Z1, W2) + B2
print(A2)
Z2 = sigmoid(A2)
print(Z2)


# 2 - Out Layer
W3 = np.array([[0.1, 0.3], [0.2, 0.4]])
B3 = np.array([0.1, 0.2])

A3 = np.dot(Z2, W3) + B3
print(A3)
Y = identity_function(A3)
print(Y)
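For reference, the signal shapes flow (2,) → (3,) → (2,) → (2,) through the three layers. A quick check using the arrays from the listing above (the values in the comments are what this example should print, roughly):

# shapes at each stage of the forward pass above
print(X.shape, W1.shape, A1.shape)   # (2,) (2, 3) (3,)
print(Z1.shape, W2.shape, A2.shape)  # (3,) (3, 2) (2,)
print(Z2.shape, W3.shape, A3.shape)  # (2,) (2, 2) (2,)
print(Y)  # approximately [0.31682708 0.69627909]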

 

 

A slightly more program-like version.

To do this properly, init_network would give the weights and biases random initial values, as when building a regression model in TensorFlow, and then let gradient descent find the right values; the principle is the same (see the sketch after the listing below).

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))  # works elementwise on NumPy arrays

def identity_function(x):
    # identity function: returns the input unchanged
    # used as the output-layer activation
    return x

def init_network():
    W1 = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])
    B1 = np.array([0.1, 0.2, 0.3])
    W2 = np.array([[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]])
    B2 = np.array([0.1, 0.2])
    W3 = np.array([[0.1, 0.3], [0.2, 0.4]])
    B3 = np.array([0.1, 0.2])
    network = [[W1, B1], [W2, B2], [W3, B3]]
    return network

def forward(network, x, out):
    for idx in range(len(network)):
        if idx == 0:
            # first layer takes the input x directly
            a = np.dot(x, network[idx][0]) + network[idx][1]
            out.append(sigmoid(a))
            continue
        if idx == len(network) - 1:
            # output layer uses the identity function instead of sigmoid
            a = np.dot(out[-1], network[idx][0]) + network[idx][1]
            out.append(identity_function(a))
            return out[-1]  # last element of the list is the network output
        # hidden layers feed on the previous layer's activation
        a = np.dot(out[-1], network[idx][0]) + network[idx][1]
        out.append(sigmoid(a))

network = init_network()
out = []

x = np.array([1.0, 0.5])
y = forward(network, x, out)
print(y)
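A minimal sketch of the random-initialization idea mentioned above (init_network_random and layer_sizes are illustrative names, not from the original code). It builds a network in the same [[W, B], ...] format, so the forward function above can consume it directly; actually training the weights would additionally need a loss function and gradients, which is not shown here.

def init_network_random(layer_sizes, scale=0.01):
    # layer_sizes like [2, 3, 2, 2]: input width followed by each layer's width
    network = []
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = scale * np.random.randn(n_in, n_out)  # small Gaussian weights
        B = np.zeros(n_out)                       # zero biases to start
        network.append([W, B])
    return network

rand_net = init_network_random([2, 3, 2, 2])
print(forward(rand_net, np.array([1.0, 0.5]), []))  # output before any training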
