First, a quick look at how to use `import numpy` under Python 3:
```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 1, 0]])
b = np.array([2, 2, 1])
print(np.dot(A.T, b))   # transpose A, then take the matrix-vector product

C = np.array([[1, 0, 0],
              [0, 1, 0],
              [2, 5, 9]])
print(np.dot(C.T, b))

A = np.array([[1, 0, 0],
              [0, 1, 0],
              [20, 50, 90]])
print(np.dot(A.T, b))
```
The code above prints:
```
[3 3 0]
[4 7 9]
[22 52 90]
```
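To see what `np.dot(A.T, b)` is doing, note that entry `j` of the result is the sum of `A[i, j] * b[i]` over the rows `i`, i.e. each row of `A` is weighted by the corresponding entry of `b`. A minimal check of the first result by hand (the explicit loop is for illustration only):

```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 1, 0]])
b = np.array([2, 2, 1])

# Entry j of np.dot(A.T, b) is sum over rows i of A[i, j] * b[i]
manual = np.array([sum(A[i, j] * b[i] for i in range(3)) for j in range(3)])
print(manual)                                  # [3 3 0]
print(np.array_equal(manual, np.dot(A.T, b)))  # True
```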
# Implementing a Neural Network from Scratch
```python
import numpy as np

def sigmoid(x):
    # Our activation function: f(x) = 1 / (1 + e^(-x))
    return 1 / (1 + np.exp(-x))

class Neuron:
    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def feedforward(self, inputs):
        # Weight the inputs, add the bias, then apply the activation function
        total = np.dot(self.weights, inputs) + self.bias
        return sigmoid(total)

weights = np.array([0, 1])  # w1 = 0, w2 = 1
bias = 4                    # b = 4
n = Neuron(weights, bias)

x = np.array([2, 3])        # x1 = 2, x2 = 3
print(n.feedforward(x))     # 0.9990889488055994
```
- BP neural networks through a C++ implementation, etc. -- a "dead-simple" machine learning tutorial
- Training a BP (neural) network to learn "multiplication" -- training anti-aircraft guns with "mosquitoes"
- ANN computing XOR & feedforward neural networks 20200302
- Representing neural networks (ANN) 20200312
- The backpropagation (BP) algorithm for a simple neural network
- Newton's method for finding a local optimum 20200310
- Installing numpy, pip3, etc. on Ubuntu
- Implementing a neural network from scratch - numpy part 01
- Improving and translating Victor Zhou's (Princeton) excellent neural network article 20200311
- A C implementation of Princeton Victor Zhou's neural network 210301
- A C implementation of XOR with a BP network 202102
- A BP network for XOR with hard-coded automatic input 20210202
- Running MNIST with TensorFlow 2.0 on Python 3.6, a pitfall at every step 20210210
- Handwritten digit recognition with numpy - directly using a BP network 210201