
Updates in the trunk since the last release. To list them:

git log -p rel-0.5...

git log -p rel-0.5... | grep Merge | less

PRs merged since 0.6rc2:

git log -p rel-0.6rc2... | grep Merge | grep '#' | cut -f 8 -d ' ' | replace "#" "* https://github.com/Theano/Theano/pull/"

tensor_var.{dot,std,argmin,argmax,argsort,clip,conj,repeat,round,trace,real,imag} now support the NumPy syntax (abalkin); see the sketch below.
* https://github.com/Theano/Theano/pull/1184 In the grad method, when asked to raise an error if there is no path between the variables, we didn't always return an error; we returned the mathematically correct answer, 0. Ian
* https://github.com/Theano/Theano/pull/1091 Rami Al-Rfou', Vivek Kulkarni: speed up sparse.AddSD, new op sparse.ConstructSparseFromList, advanced_subtensor1 (TODO: how to specify the sparse_grad?)
* https://github.com/Theano/Theano/pull/1180 more detection of infinite loops with global opt. Pascal L.
* https://github.com/Theano/Theano/pull/986 new feature:

tensor.tensordot now supports Rop/Lop (Jeremiah Lowin).
This removes the TensorDot and TensorDotGrad classes; the Dot/Elemwise ops are used instead.
tensor.dot now supports n-dimensional inputs, as in NumPy (Jeremiah Lowin).
Works on the GPU too.
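A minimal sketch of the NumPy-style variable methods and the n-dimensional tensor.dot mentioned above; the variable names and shapes are made up for illustration, and the methods are assumed to mirror their NumPy counterparts:

```python
import numpy as np
import theano
import theano.tensor as T

x = T.matrix('x')
y = T.tensor3('y')

# NumPy-style methods directly on tensor variables
out = x.clip(0, 1).argmax(axis=1)

# n-dimensional dot with NumPy semantics: contracts x's last axis
# with y's second-to-last axis
z = T.dot(x, y)

f = theano.function([x, y], [out, z])
a = np.random.rand(3, 4).astype(theano.config.floatX)
b = np.random.rand(4, 4, 5).astype(theano.config.floatX)
print([r.shape for r in f(a, b)])   # expected: [(3,), (3, 4, 5)]
```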

Scan optimizations that were skipped with a warning are now applied. Grads that were not computable are now computable. Razvan.
* https://github.com/Theano/Theano/pull/1176 fix race condition when determining if g++ is available. Abalkin
* https://github.com/Theano/Theano/pull/1173 Fred

Crash fix for tensor.roll(Var, iscalar), reported by Jeremiah Lowin. Memory leak fix on the GPU when allow_gc=False, reported by Jonas Gehring.
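A minimal sketch of the tensor.roll case that used to crash (a symbolic integer shift); the names and values are illustrative:

```python
import theano
import theano.tensor as T

x = T.vector('x')
s = T.iscalar('s')                 # symbolic integer shift: the case that used to crash
f = theano.function([x, s], T.roll(x, s))
print(f([1, 2, 3, 4], 2))          # expected: [3. 4. 1. 2.]
```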

fix GpuSoftmax and GpuSoftmaxWithBias crash on GTX285. Fred
accept the -ftz=true, --prec-div=false and --prec-sqrt=false options to nvcc via nvcc.flags. Enable all of them with nvcc.flags=--use_fast_math
* https://github.com/Theano/Theano/pull/1162 fix compilation crash with llvm on mac. Abalkin
* https://github.com/Theano/Theano/pull/1161 get_constant_value -> get_scalar_constant_value; it now raises a tensor.basic.NotScalarConstantError, a more specific error (see the sketch after this list). Ian G.
* https://github.com/Theano/Theano/pull/1143 make GpuSum work with bigger shapes when summing on the first dim of a 3d tensor. Fred, reported by Chris Currivan
* https://github.com/Theano/Theano/pull/1157 fix crash due to a race condition when importing theano (Ian G.)
* https://github.com/Theano/Theano/pull/1156 windows fix. Fred
* https://github.com/Theano/Theano/pull/1160 theano functions now always have a name field, defaulting to None. Fred
* https://github.com/Theano/Theano/pull/1152 better profiling of test time with theano-nose --time-profile
* https://github.com/Theano/Theano/pull/1150 fix for the new blas interface in scipy. Olivier D.
* https://github.com/Theano/Theano/pull/1142 raise an error when theano.shared is called with a theano variable
* https://github.com/Theano/Theano/pull/1054 more scan opt. Razvan P. TODO: what is it?
* https://github.com/Theano/Theano/pull/1140 fix openmp detection. Pascal L.
* https://github.com/Theano/Theano/pull/1139 add tensor_var.sort(). Jeremiah Lowin
* https://github.com/Theano/Theano/pull/1130 fix copy of random state between graphs. Guillaume D.
* https://github.com/Theano/Theano/pull/1127 make tensor.take support the NumPy syntax (abalkin)
* https://github.com/Theano/Theano/pull/1120 make the ScanSaveMem optimization apply more frequently. Razvan, reported by Abalkin. There was a "skipped" warning printed.
* https://github.com/Theano/Theano/pull/1116 make CrossentropySoftmax1HotWithBiasDx and CrossentropySoftmaxArgmax1HotWithBias support uint* dtypes
* https://github.com/Theano/Theano/pull/1125 fix problem with the broadcast dimensions of the Repeat op. Abalkin
* https://github.com/Theano/Theano/pull/1107 more determinism by adding a new OrderedSet class. Ian, Olivier D., Pascal L.
* https://github.com/Theano/Theano/pull/1112 fix eigh grad that didn't always return the right dtype. Fred, Olivier D.
* https://github.com/Theano/Theano/pull/1104 crash fix at compilation. Olivier D.
* https://github.com/Theano/Theano/pull/1099 Fred: fix wrong dtype in sandbox.linalg.ExtractDiag with shape of 0, reported by abalkin
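A minimal sketch of the renamed get_scalar_constant_value and the new, more specific NotScalarConstantError from the list above; the import paths follow the tensor.basic location named there and the values are illustrative:

```python
import theano.tensor as T
from theano.tensor.basic import get_scalar_constant_value, NotScalarConstantError

c = T.constant(3)
x = T.iscalar('x')

print(get_scalar_constant_value(c))   # 3: c wraps a scalar constant

try:
    get_scalar_constant_value(x)      # x is symbolic, so the specific error is raised
except NotScalarConstantError:
    print("x is not a scalar constant")
```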

Fixes three non-determinism problems: 1) forbids using a dict as the updates argument to theano.compile.function, since this makes the returned function non-deterministic; 2) fixes an issue where grad was non-deterministic; 3) replaces the Updates class, which was not appropriate for representing updates because it is non-deterministic, with the OrderedUpdates class. This required changing scan to use the new class.

Also adds some features I found useful for debugging these issues.

Trying to use the Updates class now returns an OrderedUpdates instance and issues a warning. Calling theano.function(updates=dict) also issues a warning.
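A minimal sketch of the deterministic updates usage described above, using a simple shared counter; the names and values are illustrative:

```python
from collections import OrderedDict
import theano
import theano.tensor as T

count = theano.shared(0, name='count')
inc = T.iscalar('inc')

# A plain dict has unordered iteration, which is what made the compiled
# function non-deterministic; pass an OrderedDict (or a list of pairs).
updates = OrderedDict([(count, count + inc)])
step = theano.function([inc], count, updates=updates)

step(2)
print(count.get_value())   # expected: 2
```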

TODO: duplicated with theano.sandbox.linalg.ops.*: tensor_var.{diagonal,conjugate}, theano.tensor.{diag,diagonal}
* https://github.com/Theano/Theano/pull/1012 CudaNdarray_prep_output(CudaNdarray ** arr, int nd, const int * dims) (Ian G)
* https://github.com/Theano/Theano/pull/1093 fgraph.name == fn.name (Ian G)
* https://github.com/Theano/Theano/pull/1089 cross-entropy opt works when specify_shape is used (PL); see the sketch after this list
* https://github.com/Theano/Theano/pull/1086 1090 c_code for the SpecifyShape op (Fred)
* https://github.com/Theano/Theano/pull/1081 debugmode prints more info when there is an error (Fred)
* https://github.com/Theano/Theano/pull/1087 crash fix about dimshuffle (abalkin)
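A minimal sketch of specify_shape, as referenced by the cross-entropy optimization item above; the shapes and the softmax usage are illustrative and not taken from the PR:

```python
import theano
import theano.tensor as T

x = T.matrix('x')

# specify_shape attaches a known static shape to x; it is checked at
# runtime and lets graph optimizations rely on that shape information.
x_fixed = T.specify_shape(x, (128, 10))
y = T.nnet.softmax(x_fixed)

f = theano.function([x], y)   # the optimizer now knows x is 128 x 10
```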

Documentation: David, abalkin, Amir Elaguizy, Olivier D.
