Saturday, June 22, 2019
Upping my code editing game
clang-format
autocomplete & more in emacs
Friday, June 14, 2019
Making sense of online ML courses
Check out all these different links to Stanford’s CS229 course. They seem to be different versions of the same course.
http://cs229.stanford.edu/syllabus.html
https://online.stanford.edu/courses/cs229-machine-learning
https://www.coursera.org/learn/machine-learning
http://openclassroom.stanford.edu/MainFolder/CoursePage.php?course=MachineLearning
https://see.stanford.edu/Course/CS229
MIT stuff
http://introtodeeplearning.com/
MIT course references
https://docs.google.com/spreadsheets/d/1jtdtJHXZPbVSIT2xxF18OkQmwjfZrraAKai9e90mfeU/edit#gid=0
Sunday, June 09, 2019
ITK python wrapping for plastimatch
Discourse discussion
https://discourse.slicer.org/t/python-wrapping-of-plastimatch/6722/2
ITK module template
https://github.com/InsightSoftwareConsortium/ITKModuleTemplate
https://blog.kitware.com/python-packages-for-itk-modules/
https://itkpythonpackage.readthedocs.io/en/latest/Build_ITK_Module_Python_packages.html
Relevant sections of the ITK software guide
https://itk.org/ITKSoftwareGuide/html/Book1/ITKSoftwareGuide-Book1ch3.html#x39-440003.7
https://itk.org/ITKSoftwareGuide/html/Book1/ITKSoftwareGuide-Book1ch9.html#x55-1520009.5
Consider also
Friday, June 07, 2019
TensorFlow on sherbert
The GPU on sherbert is a Titan Xp (compute capability 6.1).
Currently we have the package nvidia-driver (390.116-1) installed.
Backports has nvidia-driver (418.56-2~bpo9+1), which could be installed to support CUDA 10.1.
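To double-check which driver is actually loaded (not just installed), the kernel module reports its version under /proc. A minimal Python sketch, assuming the standard Linux path:
# Print the loaded NVIDIA kernel driver version (e.g. 390.116).
# Assumes the standard /proc/driver/nvidia/version path on Linux.
with open("/proc/driver/nvidia/version") as f:
    print(f.read().strip())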
Debian stable/backports only offers the package nvidia-cuda-dev (9.1.85-8~bpo9+1).
This is not suitable, because cuDNN does not support that CUDA release.
Instead, install NVIDIA CUDA 9.0.176 for compatibility with cuDNN
cuDNN 7.6.0
https://docs.nvidia.com/deeplearning/sdk/cudnn-support-matrix/index.html
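To confirm the versions actually installed, both the CUDA toolkit and cuDNN can be read from the files they ship. A hedged sketch, assuming the default /usr/local/cuda install prefix:
import re

# CUDA 9.x ships a version.txt in its install prefix.
with open("/usr/local/cuda/version.txt") as f:
    print(f.read().strip())  # expect "CUDA Version 9.0.176"

# cuDNN 7.x records its version as defines in cudnn.h.
with open("/usr/local/cuda/include/cudnn.h") as f:
    hdr = f.read()
major, minor, patch = (int(re.search(r"#define CUDNN_%s\s+(\d+)" % k, hdr).group(1))
                       for k in ("MAJOR", "MINOR", "PATCHLEVEL"))
print("cuDNN %d.%d.%d" % (major, minor, patch))  # expect 7.6.0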
We therefore want TensorFlow 1.12.x, which can be installed with pip:
pip install tensorflow-gpu==1.12.2
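A quick sanity check that the expected wheel was picked up (assumes the plain 1.x API, nothing environment-specific):
import tensorflow as tf
print(tf.__version__)  # expect 1.12.2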
To test, save the following as tftest.py and run it:
import tensorflow as tf
print(tf.test.is_built_with_cuda())
# min_cuda_compute_capability takes a (major, minor) pair, not a float
print(tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=(6, 1)))
An exception will be thrown if a non-CUDA-capable GPU is present, so limit the visible devices like this:
CUDA_VISIBLE_DEVICES=0 python tftest.py
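For a slightly stronger test than is_gpu_available(), the sketch below (TF 1.x graph API, my own example rather than anything from the TF docs) pins a small matmul to the GPU and logs where each op was placed. Run it the same way, with CUDA_VISIBLE_DEVICES=0:
import tensorflow as tf

# Pin a small computation to the first visible GPU.
with tf.device("/gpu:0"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
    c = tf.matmul(a, b)

# log_device_placement prints the device each op was assigned to.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))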