Unless you have intentionally blocked all software news in your social feeds, it is likely that you have heard about Docker.
I have written a few posts about Docker and how you can get started with it, but those were mostly about choosing an OS to learn Docker on, and so on; nothing about the details. In the meantime, tons of excellent resources have become available for learning Docker from scratch, and I have been lucky to read and learn from them.
Here is the list of books that I plan to read over the next 6-10 weeks. I will review each one as I finish it.
If you know of books similar to the ones on this list, please share them in the comments. I would love to read them if possible.
I recently attended GCPNext in San Francisco, where Google announced some of its latest work on Google Cloud Platform. What struck me the most from the conference was that Google is now focusing on its strengths in certain niche areas to carve out its own identity in the public cloud and win developer mindshare. Two areas were repeatedly stressed: the Google Cloud Data Platform and Machine Learning.
Google Cloud Data Platform provides an amazing fully-managed infrastructure for your Big Data projects. BigQuery and Dataflow are the crown jewels there, and if you have not used them, you owe it to yourself to try them. As an example, check out Google Developer Expert Graham Polley's recent article, “Creating a Serverless ETL Nirvana using Google BigQuery”.
Machine Learning was also big at GCPNext, and given the vast amount of data Google has amassed over the last decade and more, it should come as no surprise that some of its machine learning models are among the most accurate for a wide range of use cases. What is interesting is that while Google provides a platform for everyone to use, TensorFlow, it is also in the process of releasing ready-to-use APIs that tap into its powerful machine learning models at the backend. Machine Learning is not everyone’s cup of tea, and for most of us, having API access to machine learning models will give us a huge jumpstart in making our applications smarter and in addressing use cases that were previously almost impossible to solve.
One such API is the Google Cloud Vision API, which almost gives human eyes to your applications. It is a fairly capable API that provides label detection, safe search, logo detection, OCR, and landmark detection for your images. In many cases, the results are almost like magic. If you are looking to get started with the Google Cloud Vision API, try out the tutorial “How To Build a Monitoring Application using Google Cloud Vision API” that I recently published at ProgrammableWeb.
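To give a flavor of what a call looks like, here is a minimal sketch that builds the JSON body for the Vision API's images:annotate endpoint. The feature-type names are the ones the public API uses; the helper function and the placeholder image bytes are my own illustration, not part of any official client library:

```python
import base64
import json

def build_annotate_request(image_bytes, max_results=5):
    """Build the JSON body for a Vision API images:annotate call,
    asking for the detections mentioned above."""
    features = [
        {"type": "LABEL_DETECTION", "maxResults": max_results},
        {"type": "SAFE_SEARCH_DETECTION"},
        {"type": "LOGO_DETECTION", "maxResults": max_results},
        {"type": "TEXT_DETECTION"},          # OCR
        {"type": "LANDMARK_DETECTION", "maxResults": max_results},
    ]
    return {
        "requests": [
            {
                # The API expects the image content base64-encoded.
                "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
                "features": features,
            }
        ]
    }

# Placeholder bytes stand in for a real image file read from disk.
body = build_annotate_request(b"raw image bytes go here")
print(json.dumps(body, indent=2)[:80])
```

You would then POST this body to the images:annotate endpoint with your API key, and the response carries one annotation result per requested feature.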
As part of its Machine Learning push, Google also announced a Machine Learning Platform and a Speech API, both of which are currently available only in Limited Preview.
Google also set up codelabs, where attendees could try out multiple features of Google Cloud Platform. You have full access to these codelabs; try them now.
As part of Mumbai Technology Group, we celebrated Docker’s 3rd Birthday and I decided to play the role of a mentor.
Prior to the event, we received good support from Docker in terms of the material to present, the hands-on guide that participants were going to work through at the event and guidance on how best to go about doing the event.
We were one of the early events in the cycle, and it was good to spend some time on the GitHub project for Docker's 3rd birthday to ensure that things were streamlined and the experience for the participants was as smooth as possible.
If you plan to conduct a workshop introducing folks to Docker, I strongly advise giving the material a try. It is available at the following repo, along with the presentation deck.
I want to thank everyone at Docker, as well as Augustine, who does a fab job as usual organizing the Mumbai Tech Meetup group. A big shoutout to my fellow mentors, Maninderjit Bindra and Raza Syed. Together we were able to get a lot of folks going with Docker and pushing their final images, and a look at dockerize.it shows that we did quite well.
I attended GopherCon India 2016 last week and it was a great learning experience for me. One of the talks at the conference was on Minio by its CEO, AB Periasamy. This blog post is not about Minio, which, by the way, is a fantastic, simple-to-install-and-run “Distributed Object Storage Server”. This post is about one comment he made towards the end of his talk that has been ringing in my head ever since, and I thought it best to put it down.
AB mentioned that Minio is an Open Source project and contributors are most welcome. But then he made a statement that went something like this: “I will be more than happy to get contributions to the project that tell us what we should remove versus what additions we should do”. That statement resonates with the Go community, and especially with the core Go language team, which has kept the language small and simple, yet modern, powerful and easy to use. I believe their design discussions may be more about what to keep out of the language than what to add to it and its core packages.
AB’s statement is something I believe I should have applied more to the software I have written over the years. I can think of tons of features that we added after hours of discussion and hours of development, which no one ever used, which had low impact, or which never really went anywhere. Most users probably never even knew those options existed. Or to put it more bluntly, they “did not even care”.
I believe the time has come for more of our software to be simple, to have fewer options, and to just work with minimal configuration, or even with smart assumptions in the code that almost give it a brain of its own for the problem the software is trying to address. After developing software for 20 years, frankly I am tired of not being able to put some software to immediate use because of complex setup requirements, esoteric switches and more. It has to just work, and removing features from your product is one way of getting there. Maybe even from existing systems!
My former editor at ProgrammableWeb, Adam DuVander, taught me the term “TTFHW”, which stands for “Time To First Hello World”. I think it is more relevant today than ever: the more quickly a developer can set up your stuff and get going with a version of Hello World for your software, the better off everyone will be. And thinking in terms of “removing features” could be one of your guiding principles.
I had the opportunity to attend GopherCon India 2016. This post contains my takeaways from most of the sessions. I have done my best to include links to each of the presentations; I may be missing some, but I will update the post as the speakers publish them on various media.
Static Site Generators are all over the place. One of the best in class generators is Hugo. The idea behind Hugo is to make website creation fun again and it lives up to that. Equally important is an exercise that you need to do on your own to determine if you really need a dynamic website with all the heavy baggage that comes with it.
Machine Learning is all over the place, and rightly so. With the tremendous amount of data, algorithms and computing power at our disposal today, we are beginning to see a clear shift where the tools and services are now available to all developers, both individuals and organizations.
It is not easy to get going with Machine Learning and the task is not to be underestimated. You need expertise in several areas coupled with a mentality that combines persistence and dedication to solving the problem (and in some cases your final results will still conclude that the experiment failed). In addition to developer chops, you need skills in data processing, statistics and an understanding of a particular Machine Learning platform that you plan to use.
Often the difficult part is kick-starting your understanding of this domain. I know a bit of machine learning and have used the Google Prediction API on a project, and as a student and teacher, I am always on the lookout for how anyone approaches the very difficult task of explaining a complex subject to a general audience. In my opinion, this is very hard to do well. People often conclude that such a talk was very basic, but I challenge anyone to explain a complex topic in a few minutes in a way that clearly conveys the high-level picture and then leaves it to the student (who is hopefully curious by now) to take it to the next level.
One such introduction that I came across this week was a Pluralsight course titled Understanding Machine Learning by David Chappell. It is a 40-minute video course (yes, only 40 minutes!) with a great introduction to the subject that I think could be understood by anyone remotely connected with the software industry. The key processes and terms are explained via short, concise examples that drive the point home.
Take a look at it if you can. If you wanted to get the basics on what ML is all about, the processes and what it involves, this is a great introduction. It should get you curious enough to then start exploring ML libraries/tools in a language of your choice.
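To make the core "train on data, then predict" loop concrete, here is a minimal sketch in plain Python. The numbers and function names are made up for illustration; the least-squares fit itself is the standard textbook formula:

```python
def fit_line(xs, ys):
    """'Training': fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var            # slope
    b = mean_y - a * mean_x  # intercept
    return a, b

def predict(model, x):
    """'Prediction': apply the learned model to unseen input."""
    a, b = model
    return a * x + b

# Made-up training data: hours studied vs. exam score.
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]

model = fit_line(hours, scores)
print(predict(model, 6))  # score predicted for 6 hours of study
```

Real machine learning swaps in richer models and far more data, but the shape of the workflow, training on labeled examples and then predicting on new inputs, is exactly what the course walks you through.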
P.S.: The 40-minute introduction is good enough for you to start making sense of the image you see in this blog post.