100 thoughts on “Machine Learning Zero to Hero (Google I/O’19)”

  1. If you don't like Python for some reason, you can write basically the same code in C#. See https://habr.com/post/453232/ for an example

  2. One thing that has always confused me is the initialization of all those random factors: how many neurons should I use, which activation functions should I pick for a given layer, and – most importantly – what should the architecture of my whole NN be for a given task? Can anyone help and share some knowledge on these?

  3. Can you please share the code you used in the Jupyter notebook? I would be very happy about that.

  4. Wow, this was an awesome presentation. I'll focus on the first speaker since I was somewhat familiar with the subject (I also liked the second one).

    He explained all of the key concepts as clearly as I have ever seen – and I have looked at a LOT of videos. This is a master class in how to do it. I hope he presents on many more related topics. Definitely worth watching.

    Thank you so much!

  5. I've been waiting and waiting, thinking the only way to properly learn machine learning is through university courses. I've been wrong. I appreciate the work you guys are doing to show the next generation of programmers that machine learning is something anyone can understand and use as long as they put the effort in. Thanks, guys 🙂

  6. Just FYI…
    Henri Poincaré, French mathematician: https://en.wikipedia.org/wiki/Henri_Poincaré
    How to pronounce "Poincaré": https://www.youtube.com/watch?v=zYF7UFY40iE

  7. I love the idea, but installing with Anaconda on Windows makes Spyder and/or the Anaconda prompt unresponsive. This problem also occurs when using Keras, because TensorFlow is the back end. I think I've got a working setup now, but only after two uninstall-and-reinstall attempts.

  8. Laurence Moroney is a golden god xD Concise and well presented.
    Thanks for your inputs, cleared up all my foggy areas on this.

  9. Nice explanation of why we use a CNN to extract features by applying different filters, rather than just flattening the image pixels as input – but why is she standing back there all the time?

  10. In case anyone is interested, this is the link to the notebook used during the presentation: https://github.com/lmoroney/io19/blob/master/Zero%20to%20Hero/Rock-Paper-Scissors.ipynb

  11. So they put in a bunch of answers, the machine learning catalogs those answers by similar attributes, and it creates rules that constitute the best answer based on how similar answers compare against each other.

  12. Thank you for the useful video. I am curious about deep learning and your explanations showed me a lot of useful insights.

  13. The guy's presentation: logical, step-by-step, in plain language. The girl's presentation: filled with specialist jargon and logical jumps – horrible.

  14. I remember when Laurence taught how to recognize those same rock, paper and scissors images in the Coursera "TensorFlow in Practice" course, and the way he explained it made it pretty easy to implement. Excellent teacher!

  15. Nice explanations and tools. I think the main reason I am struggling to make use of machine learning is that I am attempting to implement it from scratch.

  16. (Aside from the talk being great) The presentation screen is awesomely beautiful!! Imagine seeing this for the first time, even just from year 2000.

  17. I’ve been looking into ML & Supervised Learning. Looking to make a connection with a PU professor

  18. Super presentation! Nice explanation on concepts in Machine Learning using TensorFlow, 'Convolutional layer,' and 'Pooling'.

  19. Imagine if you were to feed millions of people’s medical records, all known medical research, etc. into one of these models.

  20. Finally, I got this shit. I have been watching various speeches and talks for the past two years, and every time I thought I got it, I didn't – until I stumbled upon this video. Now I know the concept well enough that I can explain it to both technical and non-technical people. Thanks for the wonderful video.

  21. Which one is the course on Coursera? I might do it as soon as I finish the course I'm currently doing. Thanks for the video.

  22. It's simply awesome. I've been using a Jupyter notebook, and it has cleared up many of the queries running through my head – with a lot more still remaining. Good work.

  23. You don't need TensorFlow – you can use Excel and Java. And it's incorrect to say that it's hard to code in Java; the important part is the training data, and lots of it. This is just a Google ad, but at least the audio is good.

  24. Hi! This man is amazing. He reminds me of my old days in the faculty of science, 1992: in normal programming we have a function f, pass it a value X, and wait for the result Y, i.e. Y = f(X). In ML, however, we have the values of both X and Y and ask the machine to suggest the most accurate function f(X). Thank you – I looked you up and checked out your courses on Coursera.
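    The Y = f(X) framing above is essentially the talk's opening example (there, a one-neuron Keras model learning y = 2x − 1). A dependency-free NumPy sketch of the same idea – hand the machine only (X, Y) pairs and let gradient descent recover f:

```python
import numpy as np

def f(x):                  # classic programming: we write the rule ourselves
    return 2 * x - 1

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = f(xs)                 # in ML, we only hand over these (X, Y) pairs

# One "neuron": y_hat = w*x + b, fitted by gradient descent on squared error.
w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    err = w * xs + b - ys
    w -= lr * 2 * np.mean(err * xs)   # gradient of MSE w.r.t. w
    b -= lr * 2 * np.mean(err)        # gradient of MSE w.r.t. b

print(round(w, 2), round(b, 2))       # recovers ≈ 2.0 and -1.0
```

    The learning rate and iteration count here are illustrative choices; the talk's Keras version does the same fit with `Dense(units=1)` and an optimizer doing this loop for you.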

  25. What's the purpose of multiple convolutions? The first convolution highlights features and simplifies and shrinks the input data, but what do the following convolutions do? Are they applied to the already modified data, or to the unaltered original?
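    On that question: each convolution layer operates on the output of the previous layer – the already filtered (and usually pooled) feature maps – not on the original image, which is how deeper layers build more abstract features out of simpler ones. A single-channel NumPy sketch of how the data flows and shrinks (real layers learn many filters in parallel; the filter values here are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((28, 28))       # stand-in grayscale input

def conv_valid(img, k):
    """Single-filter 'valid' convolution, for shape illustration only."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def max_pool2(img):
    """2x2 max pooling: keep the strongest response in each block."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return img[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

k1, k2 = rng.random((3, 3)), rng.random((3, 3))

x = conv_valid(image, k1)  # 1st conv sees raw pixels        -> (26, 26)
x = max_pool2(x)           # pooling shrinks the feature map -> (13, 13)
x = conv_valid(x, k2)      # 2nd conv sees the FEATURE MAP   -> (11, 11)
print(x.shape)
```

    So the second convolution never touches the clean image: it looks for patterns in the first layer's responses (e.g. combinations of edges forming corners or textures).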

  26. Awesome tutorial, sir. I like the way you explain things at a novice level. I hope you can help me solve the following error I get when I try to run this in a Jupyter notebook:

    import SimpleITK as sitk
    import numpy as np

    # This function reads a '.mhd' file using SimpleITK and returns the
    # image array, origin and spacing of the image.
    def load_itk('C:UserstekSampleNodule'):
        # Reads the image using SimpleITK
        itkimage = sitk.ReadImage('C:UserstekSampleNodule')
        # Convert the image to a numpy array first, then shuffle the
        # dimensions to get the axes in the order z, y, x
        ct_scan = sitk.GetArrayFromImage(itkimage)
        # Read the origin of the ct_scan; it will be used to convert
        # coordinates from world to voxel and vice versa.
        origin = np.array(list(reversed(itkimage.GetOrigin())))
        # Read the spacing along each dimension
        spacing = np.array(list(reversed(itkimage.GetSpacing())))
        return ct_scan, origin, spacing

    When I run it, it says:

    File "<ipython-input-3-aceebef52c30>", line 7
        def load_itk('C:UserstekSampleNodule'):
    SyntaxError: invalid syntax
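    [Editor's note] The SyntaxError above comes from the `def` line: Python function parameters must be plain names, not string literals, so the path has to be passed in when the function is *called*. A corrected sketch (the call at the bottom uses a hypothetical placeholder path – the original path as pasted appears to have lost its backslashes, so substitute your real one; raw strings like r'C:\...' avoid Windows escaping problems):

```python
import numpy as np

def load_itk(filename):
    """Read a '.mhd' file with SimpleITK; return (image array, origin, spacing)."""
    # Imported inside the function only so this sketch parses without
    # SimpleITK installed; a top-level import is equally fine.
    import SimpleITK as sitk

    itkimage = sitk.ReadImage(filename)
    # numpy array with axes reordered to z, y, x
    ct_scan = sitk.GetArrayFromImage(itkimage)
    # origin and spacing, reversed to match the z, y, x axis order
    origin = np.array(list(reversed(itkimage.GetOrigin())))
    spacing = np.array(list(reversed(itkimage.GetSpacing())))
    return ct_scan, origin, spacing

# Hypothetical call – substitute the real path to your .mhd file:
# ct_scan, origin, spacing = load_itk(r'C:\path\to\SampleNodule.mhd')
```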

  27. 11:55 – what happens at the borders of the image?
    E.g., you want to calculate the "next pixel value" (the pixel value for the convolved layer) and you take pixel [0, 0] – if you now apply the filter to the surrounding pixels, you'll read image pixels -1 to 1 in the x and y directions. Does TensorFlow repeat the borders? Does it fill with zeros or ones? What is happening here, and how can you specify it?
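    In Keras this is controlled by the `padding` argument of `Conv2D`: the default `'valid'` applies the filter only where it fully fits inside the image (so the output shrinks and the border question never arises), while `'same'` zero-pads the borders so that, at stride 1, the output keeps the input's size. A rough NumPy sketch of the two behaviours:

```python
import numpy as np

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0               # simple 3x3 averaging filter

def conv(img, k):
    """Slide the filter over every position where it fully fits."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

valid = conv(image, kernel)                   # 'valid': no padding -> (2, 2)
padded = np.pad(image, 1, mode='constant')    # zero-pad one pixel on each side
same = conv(padded, kernel)                   # 'same': output back to (4, 4)
print(valid.shape, same.shape)
```

    So with the talk's default setup the filter simply never reads pixel [-1, -1]; other border strategies (reflecting or repeating edge pixels) exist in image-processing libraries but are not what `Conv2D`'s `'same'` does.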

  28. Very enlightening talk.

    How long would it take an ML noob like me – though a rather experienced Android/Kotlin dev – to create a similar image recognizer app?
