
ucla-vision/entropy-sgd: Lua implementation of Entropy-SGD

Repository: https://github.com/ucla-vision/entropy-sgd
Primary language: Python 62.8%

Entropy-SGD: Biasing Gradient Descent Into Wide Valleys

This is the implementation for Entropy-SGD: Biasing Gradient Descent Into Wide Valleys, presented at ICLR '17. It contains the Lua implementation used for the experiments in the paper, as well as an identical PyTorch implementation in the python folder.


Instructions for Lua

  1. You will need Torch installed with cuDNN. The code is set up for training on the MNIST and CIFAR-10 datasets. For the former, we use the mnist package, which can be installed with luarocks install mnist. To train on CIFAR-10, download the dataset from here. The script process_cifar.py performs the standard ZCA whitening of the dataset used in our experiments.
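For reference, the standard ZCA transform that process_cifar.py applies can be sketched in a few lines of numpy. This is a minimal illustration, not the repository's code; the function name and the epsilon value are assumptions.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """Whiten rows of X (n_samples x n_features) with the ZCA transform."""
    X = X - X.mean(axis=0)                          # center each feature
    cov = np.cov(X, rowvar=False)                   # feature covariance
    U, S, _ = np.linalg.svd(cov)                    # eigendecomposition
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T   # ZCA whitening matrix
    return X @ W

np.random.seed(0)
X = np.random.randn(256, 8)
Xw = zca_whiten(X)
# the covariance of the whitened data should be close to the identity
print(np.allclose(np.cov(Xw, rowvar=False), np.eye(8), atol=1e-2))  # True
```

Unlike PCA whitening, the ZCA transform rotates back into the original coordinate system, so whitened images still look like images.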

  2. Run th train.lua -h to see the command-line options. The code defaults to vanilla SGD on LeNet and should converge to an error of about 0.5% after 100 epochs. To train the same network with Entropy-SGD, you can execute

    th train.lua -m mnistconv -L 20 --gamma 1e-4 --scoping 1e-3 --noise 1e-4
    

The parameters perform the following functions:

  • L is the number of Langevin updates per outer step; it controls exploration and is usually set to 20 in our experiments;
  • gamma is also called "scope" in the paper; it controls the forcing term that prevents the Langevin updates from exploring too far;
  • scoping progressively increases gamma during training, modulating it as gamma * (1 + scoping)^t, where t is the number of parameter updates;
  • noise is the temperature term in Langevin dynamics.
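Putting the four parameters together, one outer Entropy-SGD step as described in the paper can be sketched in numpy. The step sizes, the averaging constant alpha, and the toy quadratic objective below are illustrative assumptions, not the repository's defaults.

```python
import numpy as np

def entropy_sgd_step(x, grad_f, gamma, L=20, noise=1e-4, lr=0.1,
                     inner_lr=0.1, alpha=0.75):
    """One outer Entropy-SGD step on the parameter vector x."""
    rng = np.random.default_rng(0)
    xp, mu = x.copy(), x.copy()
    for _ in range(L):                       # L Langevin (SGLD) updates
        # gradient of the inner objective: f(x') plus the gamma-scoped
        # quadratic term pulling x' back toward the outer iterate x
        dxp = grad_f(xp) - gamma * (x - xp)
        xp = xp - inner_lr * dxp \
             + np.sqrt(inner_lr) * noise * rng.standard_normal(x.shape)
        mu = (1 - alpha) * mu + alpha * xp   # running average of the iterates
    # the local-entropy gradient is approximated by gamma * (x - mu)
    return x - lr * gamma * (x - mu)

# toy usage: descend a quadratic bowl, applying scoping to gamma each step
grad = lambda w: 2.0 * w
x, gamma, scoping = np.ones(4), 1.0, 1e-3
for t in range(200):
    x = entropy_sgd_step(x, grad, gamma * (1 + scoping) ** t)
print(np.linalg.norm(x))  # shrinks toward 0
```

The inner loop never touches the outer weights; only the averaged iterate mu feeds back, which is what biases the descent toward wide valleys.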
  3. Note that running the code with the -L 0 argument (the default) uses vanilla SGD with Nesterov's momentum. We also collect run-time statistics of Entropy-SGD, such as gradient norms and the direction of the local-entropy gradient relative to the original stochastic gradient. You can see these using the -v / --verbose option.
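The direction statistic mentioned above amounts to a cosine similarity between flattened gradient vectors. This is an illustrative sketch of that quantity, not the repository's code.

```python
import numpy as np

def grad_cosine(g_entropy, g_sgd):
    """Cosine of the angle between two flattened gradient vectors."""
    num = np.dot(g_entropy, g_sgd)
    den = np.linalg.norm(g_entropy) * np.linalg.norm(g_sgd)
    return num / den

# toy gradients at 45 degrees to each other
g1 = np.array([1.0, 0.0])
g2 = np.array([1.0, 1.0])
print(round(grad_cosine(g1, g2), 4))  # 0.7071
```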

  4. For CIFAR-10, run

    th train.lua -m cifarconv --L2 1e-3
    th train.lua -m cifarconv -L 20 --gamma 0.03 --L2 1e-3
    

for SGD or Entropy-SGD respectively.


Instructions for PyTorch

The code for this is inside the python folder. You will need the Python packages torch and torchvision installed from pytorch.org.

  1. The MNIST example downloads and processes the dataset the first time it is run. The files will be stored in the proc folder (the same folder CIFAR-10 uses in the Lua version).

  2. Run python train.py -h to see the command-line arguments. The default is SGD with Nesterov's momentum on LeNet. You can run Entropy-SGD with

    python train.py -m mnistconv -L 20 --gamma 1e-4 --scoping 1e-3 --noise 1e-4
    

Everything else is identical to the Lua version.


Computing the Hessian

The code in hessian.py computes the Hessian of a small convolutional neural network using SGD and Autograd. Note that this takes a long time (a day or so), and you need to be careful about memory usage: the experiments in the paper were run on EC2 with 256 GB of RAM. This code uses the MNIST dataset downloaded when you run the PyTorch step above.
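As a rough, library-free illustration of the quantity being computed (the repository itself uses Autograd, not finite differences), here is a central finite-difference Hessian of a tiny least-squares model. The model and data are made up for illustration, and the O(d^2) size of the result is exactly why large networks demand so much memory.

```python
import numpy as np

def loss(w, X, y):
    """Squared loss of a one-layer linear model."""
    return 0.5 * np.mean((X @ w - y) ** 2)

def hessian_fd(f, w, h=1e-4):
    """Dense d x d Hessian of f at w via central finite differences."""
    d = w.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            e_i, e_j = np.eye(d)[i] * h, np.eye(d)[j] * h
            H[i, j] = (f(w + e_i + e_j) - f(w + e_i - e_j)
                       - f(w - e_i + e_j) + f(w - e_i - e_j)) / (4 * h * h)
    return H

X = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
y = np.array([1.0, 0.0, 1.0])
H = hessian_fd(lambda w: loss(w, X, y), np.zeros(2))
# for squared loss the exact Hessian is X^T X / n
print(np.allclose(H, X.T @ X / len(y), atol=1e-4))  # True
```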



