It's very difficult, but fun, to keep up with all the new ideas in machine learning. I was recently alerted to the new swish() activation function for neural networks. My thanks to fellow ML enthusiast Thorsten Kleppe for pointing swish() out to me when he mentioned the similarity between swish() and gelu() in a comment to an earlier post. I don't know Thorsten personally, but he seems like a very bright and creative guy.

In the early days of NNs, logistic sigmoid() was the most common activation function. Then relu() was found to work better for deep neural networks. Many variations of relu() followed, but none were consistently better, so relu() has been used as a de facto default since about 2015. The swish() function was devised in 2017. It's sort of a cross between logistic sigmoid() and relu(). I made this graph of sigmoid(), swish(), and relu() using Excel.

The three related activation functions are:

  sigmoid(x) = 1 / (1 + exp(-x))
  relu(x)    = max(0, x)
  swish(x)   = x * sigmoid(x)

The Wikipedia entry on swish() points out that swish() is sometimes called sil() or silu(), which stands for sigmoid-weighted linear unit.

At the time I'm writing this blog post, Keras and TensorFlow have a built-in swish() function (released about 10 weeks ago), but the PyTorch library does not have a swish() function. However, it's trivial to implement inside a PyTorch neural network class, for example:

  # z = T.tanh(self.hid1(x))  # replace tanh() w/ swish()
  z = swish(self.hid1(x))     # where swish(x) = x * T.sigmoid(x)
  z = self.oupt(z)            # no softmax for multi-class

Update: I just discovered that PyTorch 1.7 does have a built-in swish() function.

The fact that PyTorch doesn't have a built-in swish() function is interesting. Adding such a trivial function just bloats a large library even further. But if swish() had been in PyTorch I would have discovered it earlier. So, adding what are essentially unnecessary functions to PyTorch can have a minor upside.
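For completeness, here is a minimal self-contained sketch of a full network class that uses swish() in its forward() method. The layer names hid1 and oupt match the snippet above; the 4-8-3 layer sizes are just placeholders for illustration.

  import torch as T

  def swish(x):
      # swish(x) = x * sigmoid(x)
      return x * T.sigmoid(x)

  class Net(T.nn.Module):
      def __init__(self):
          super(Net, self).__init__()
          self.hid1 = T.nn.Linear(4, 8)   # 4 inputs, 8 hidden nodes (placeholder sizes)
          self.oupt = T.nn.Linear(8, 3)   # 3 output classes (placeholder)

      def forward(self, x):
          # z = T.tanh(self.hid1(x))   # old: tanh() activation
          z = swish(self.hid1(x))      # new: swish() activation
          z = self.oupt(z)             # no softmax for multi-class
          return z

  net = Net()
  dummy = T.randn(2, 4)   # batch of 2 items, 4 features each
  print(net(dummy))

Defining swish() as a standalone helper keeps the forward() method readable, and the same helper can be reused by other network classes.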
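And a sketch of the built-in version mentioned in the update note: my understanding is that PyTorch 1.7 exposes the function under its silu name, both as a module (torch.nn.SiLU) and as a functional call (torch.nn.functional.silu).

  import torch as T

  x = T.tensor([-2.0, 0.0, 2.0])

  # module form
  silu = T.nn.SiLU()
  print(silu(x))

  # functional form
  print(T.nn.functional.silu(x))

Both forms compute x * sigmoid(x), so they can be dropped in as a direct replacement for a hand-coded swish().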