mean squared error (MSE)
Minimize the MSE between the predictions y and the targets y_data:
loss = tf.reduce_mean(tf.square(y - y_data))
GradientDescentOptimizer
https://en.wikipedia.org/wiki/Gradient_descent
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
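Putting the loss and the optimizer together, a minimal end-to-end sketch of fitting a line (the toy data and the W, b variables here are assumptions for illustration, in the style of the classic TF getting-started example):

import numpy as np
import tensorflow as tf

# Toy data on the line y = 0.1 * x + 0.3, which training should recover.
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

# Model parameters to learn.
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b

loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(201):
        sess.run(train)
    print(sess.run([W, b]))  # should approach [0.1], [0.3]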
Logarithmic Loss
https://www.kaggle.com/wiki/LogarithmicLoss
import numpy as np

def logloss(act, pred):
    # Clip predictions away from 0 and 1 so the logs stay finite.
    epsilon = 1e-15
    pred = np.clip(pred, epsilon, 1 - epsilon)
    # Average negative log-likelihood over all examples.
    ll = act * np.log(pred) + (1 - act) * np.log(1 - pred)
    return -np.mean(ll)
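A quick sanity check on made-up numbers:

print(logloss(np.array([1, 0, 1]), np.array([0.9, 0.1, 0.8])))  # ≈ 0.14, low because all three predictions are close to the labels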
matrix product
import tensorflow as tf
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
product = tf.matmul(matrix1, matrix2)
# the with-block runs the graph and closes the session automatically
with tf.Session() as sess:
    result = sess.run([product])
    print(result)
# ==> [[ 12.]]
Placeholders
A placeholder is a value that we'll input when we ask TensorFlow to run a computation. Here we create nodes for the input images (flattened 28x28 = 784 pixels) and the target output classes (10 digits):
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
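Placeholders receive their actual values at run time through feed_dict. A minimal sketch, where the random batch is an assumption standing in for real MNIST data:

import numpy as np
batch_x = np.random.rand(50, 784).astype(np.float32)  # fake images
batch_y = np.zeros((50, 10), dtype=np.float32)
batch_y[:, 0] = 1.0                                    # pretend every example is class 0
# Any op that depends on x and y_ can then be evaluated, e.g.:
# sess.run(cross_entropy, feed_dict={x: batch_x, y_: batch_y})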
Variables
A Variable is a value that lives in TensorFlow's computation graph. It can be used and even modified by the computation.
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
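Variables must be initialized within a session before they can be used; a minimal sketch:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # sets W and b to their initial values (zeros here)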
softmax regression model
y = tf.nn.softmax(tf.matmul(x,W) + b)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
cross entropy
Why You Should Use Cross-Entropy Error Instead Of Classification Error Or Mean Squared Error For Neural Network Classifier Training
-tf.reduce_sum(y_ * tf.log(y))
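A quick worked example of the point made in that article (numbers made up): for a one-hot label, cross entropy punishes a confidently wrong prediction far more than a merely unsure one:

import numpy as np
y_true = np.array([1.0, 0.0, 0.0])   # one-hot label: class 0
mild   = np.array([0.4, 0.3, 0.3])   # unsure but leaning the right way
wrong  = np.array([0.1, 0.8, 0.1])   # confidently wrong
for p in (mild, wrong):
    print(-np.sum(y_true * np.log(p)))  # 0.92 vs 2.30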
neural networks
Rectifier ReLU
https://en.wikipedia.org/wiki/Rectifier_(neural_networks)
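ReLU just clips negative inputs to zero, f(x) = max(0, x); a minimal sketch with tf.nn.relu:

import tensorflow as tf
z = tf.constant([-2.0, -0.5, 0.0, 1.5])
with tf.Session() as sess:
    print(sess.run(tf.nn.relu(z)))  # ==> [0.  0.  0.  1.5]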
conv 2d
http://stackoverflow.com/questions/34619177/what-does-tf-nn-conv2d-do-in-tensorflow
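A minimal sketch of tf.nn.conv2d; the shapes here are assumptions for illustration. The input is [batch, height, width, channels] and the filter is [filter_height, filter_width, in_channels, out_channels]:

import tensorflow as tf
# One 5x5 single-channel "image" and one 3x3 filter.
image  = tf.ones([1, 5, 5, 1])
kernel = tf.ones([3, 3, 1, 1])
conv = tf.nn.conv2d(image, kernel, strides=[1, 1, 1, 1], padding='SAME')
with tf.Session() as sess:
    print(sess.run(conv).shape)  # ==> (1, 5, 5, 1); each output is a weighted sum over a 3x3 window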