Recent posts

Joe Rogan Experience #1391- Tulsi Gabbard & Jocko Willink

Tulsi Gabbard, Jocko Willink and Joe Rogan discuss US politics. And it was an awesome conversation! Finally, a politics discussion without malevolence.

These hyperpartisan times we live in are tearing at the fabric of humanity: inflating people’s egos so they resist change, and dividing people over ludicrous issues. Let’s not forget that the media is the cheerleader of these conflicts, because conflict is good for ratings! We are better than that.

A few takeaway snippets

The government is not a great solution to most of our problems. The solutions are balanced.

Jocko Willink

Everything is broken down into little sound bites. You are either pro this or against that.

Jocko Willink

It’s like if you believe yes and I believe no, then I will attack you. Who cares about political belief alignment when we can have so many things in common (surfing, training martial arts, etc.) and still hang out.

Jocko Willink

Our ego pins us into a corner. We should be looking at other people’s perspectives to understand where they are coming from.

Jocko Willink

Every News story that comes out is… “THIS IS THE END OF THE WORLD!”

Jocko Willink

America is stronger than one man!

Jocko Willink

We think we can fully understand some news event in an hour or a tweet. We need to let these things develop and see what the actual long-term effects are. We can’t be making snap judgments and split-second decisions when we need to assess what’s really going on. The press is just snap decisions, snap decisions… very comical.

Jocko Willink

Media are the cheerleaders of these conflicts because conflict is good for ratings!

Jocko Willink

Tulsi Gabbard is a 2020 Presidential Candidate of the Democratic Party and has been serving as the U.S. Representative for Hawaii’s 2nd congressional district since 2013. https://www.tulsi2020.com/

Jocko Willink is a decorated retired Navy SEAL officer, author of the book Extreme Ownership: How U.S. Navy SEALs Lead and Win, and co-founder of Echelon Front, where he is a leadership instructor, speaker, and executive coach. His new book “Leadership Strategies and Tactics” will be available in January 2020. https://www.youtube.com/channel/UCkqcY4CAuBFNFho6JgygCnA

The Great Pumpkin: Silicon Valley’s Best Pumpkin Patch

The month leading up to Halloween (31 October) becomes a pumpkin fest full of festivities, with empty lots and farms turning into pumpkin patches for all ages.

It’s no secret Americans sure love celebrating Halloween. You don’t even have to check the calendar. At the first public appearance of a pumpkin looking at you outside someone’s home or at the local supermarket (Safeway, Trader Joe’s, Whole Foods Market, etc.) you will instantly be reminded it’s October and Halloween is near.

If you have kids then this will be a busy time for you. From scary pumpkins to ghosts to dress-ups to pumpkin patches, your month as a parent is set. And the best part is you’ll enjoy it with them. Especially the pumpkin patches.

There are a ton of pumpkin patches you can visit. The city ones are often found along El Camino Real in the form of a pop-up venue. They are OK: smallish, but convenient to drop by to choose a big pumpkin for d-day. The pumpkin patches located on farms just shy of the Silicon Valley boundaries are gold!

Enter the Spina Farms Pumpkin Patch

Spina Farms Pumpkin Patch is located in beautiful Coyote Valley at the corner of Santa Teresa Boulevard and Bailey Avenue. It features tractor rides through the corn fields, mini train rides for kiddos, an Area 51 corn maze with pumpkin blasters (pumpkin-loaded bazookas), live music and, of course, thousands of pumpkins of all shapes and sizes. You pick up a barrel and go for your life picking the crop of the lot. At checkout, don’t forget to grab yourself a soft, juicy pumpkin bread… and leave some for me 😉

I’ll let the photos do all the talking, but as a final note, here’s why I love this place. It feels authentic, like a real farm away from all the chaos of Silicon Valley. Just what the doctor ordered, lol. The freshness is in everything you see and smell. Like being in a forest.

Photos from our visit there a few days ago

What’s your favorite Pumpkin Patch in and around Silicon Valley?

Recover gigs of storage with this simple hack

I recovered 30 gigs of storage on my MBP (MacBook Pro) with this simple hack of converting my PNGs to JPEGs. Here’s how.

If you own a Mac or an iPhone, you’ll notice both devices save screenshots with a PNG extension. Portable Network Graphics (.png) is an image file format that uses lossless compression to save your images. You may be thinking this lossless compression is the best format, but the reality is JPEG, a lossy format, is just as good at retaining image quality at a fraction of the size of a PNG.

Image sizes really matter when your device runs 1080p+ resolutions (most modern Macs) or a new OLED iPhone. A full-size (no crop) screenshot on my MBP with a resolution of 2080×1080 yields a 7MB (7168KB) image. A JPEG equivalent is 500KB. That’s ~14x smaller. If you are like me and take screenshots as reminders or todos (GTD baby!) then you’ll be chewing through storage fast.
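
If you want to sanity-check those numbers on your own machine, macOS ships with the sips tool, so a quick one-off comparison could look like this (the filename is just a placeholder):

# Convert a single screenshot to JPEG using macOS's built-in sips tool
$ sips -s format jpeg "Screen Shot.png" --out "Screen Shot.jpg"

# Compare the two file sizes side by side
$ ls -lh "Screen Shot.png" "Screen Shot.jpg"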

100 of these PNGs and you’ll be reaching 1GB territory.

With storage being dirt cheap, who cares, right? Not so. Laptops with external drives are annoying and iPhones do not have expansion cards. Furthermore, data transfer is a burden, considering upload speeds are conveniently ignored yet important if you are backing up to the cloud. And let’s face it, who’s got time to sit around waiting when the same lot of photos with identical quality (assuming you aren’t blowing them up on a wall) can be backed up to your Dropbox cloud storage 14x faster? GTD baby!

Did I mention this will also speed up Spotlight search and indexing, extend your SSD’s life and open those images faster?

Convert PNGs to JPEGs

PNGs are losslessly compressed images that mainly come from things like screenshots on your Mac or iPhone. They don’t need to stay PNGs unless you are really picky about the sharpness of text; i.e. under JPEG, text becomes a tad more blurry since the compression reuses surrounding pixels to make the image smaller.

Overall, the case for keeping PNGs is weak unless you do a lot of photo editing and need that pixel-level detail, especially for font/text clarity.

[1] Identify Opportunities

Run a scan to identify where the opportunities (PNGs) are located on your drive.

$ find . -type f -iname '*.png' | wc -l

find . -type f finds all files (-type f) in this directory (.) and in all subdirectories; the filenames are then printed to standard out, one per line.

This is then piped (|) into wc (word count); the -l option tells wc to count only the lines of its input.

If you only want files directly under this directory and not to search recursively through subdirectories, you could add the -maxdepth flag:

$ find some_directory -maxdepth 1 -type f | wc -l

The key to the case-insensitive search is the -iname option, only one character different from -name, which matches filenames regardless of case (so both .png and .PNG are counted).
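
Counting files is one thing; to see how much space those PNGs actually occupy, the same find pattern can feed du (a quick sketch; with a very large file list, du may be invoked more than once and print several subtotals):

# Sum the disk usage of every PNG under the current directory
$ find . -type f -iname '*.png' -exec du -ch {} + | tail -n 1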

[2] Convert

Create JPEG versions of the PNGs and remove the old PNGs. There is no need to keep the old PNGs; they are the ones taking up all the space.

Run this on a small subset of your PNGs to make sure you are happy with the resulting JPEG.

$ mogrify -format jpg *.png && rm *.png
$ mogrify -format jpg *.PNG && rm *.PNG

Or convert and keep the original PNGs:

$ mogrify -format jpg *.png
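
One caveat with the one-liners above: rm *.png deletes every PNG in one shot once mogrify exits cleanly. A safer sketch, assuming ImageMagick is installed (e.g. via brew install imagemagick), converts one file at a time and removes each PNG only after its own conversion succeeds:

# Convert each PNG individually; remove the original only if mogrify succeeded
$ for f in *.png *.PNG; do [ -e "$f" ] && mogrify -format jpg "$f" && rm "$f"; done

If text sharpness matters to you, mogrify also accepts a -quality flag (e.g. -quality 95) to trade back some of the space savings for crisper output.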

[3] Celebrate

How much space did you recover?

Who really owns your Bose QC35 headphones?

I was excited to finally get my hands on the new Bose QC35 II because noise simply annoys me more than the average bear. The beautiful world we live in today is very noisy, from cars to traffic lights to photocopiers to background chatter, and it’s something some of us have learnt to live with while others suffer from the disruption. I’m in the latter crew. Until, that is, a $300 Bose QC35 II became my friend, even if it was for a short period of time.

During this short period they were amazing. The ANC (Active Noise Cancelling) was superb! I was so excited I started showcasing them (free marketing for Bose) to all my software engineer and entrepreneur friends who, like me, seek silence to do their focused work, telling them how their lives would change. I also sold my wife on these as a solution for plane travel. Noise inside planes reaches 80 dB, the sound of a vacuum cleaner near your ears, and on a trip from San Francisco to Sydney that kind of exposure has been shown over time to damage eardrums.

That is until I upgraded to firmware 4.5.2.

Enter the Firmware

You see, the Bose QC35 II has a computer inside which uses multiple microphones, placed strategically, to listen to incoming noise and cancel out the sound waves. This is orchestrated by a small onboard computer (think Arduino) running custom Bose software to run and manage the hardware. Hence ANC. Software has bugs, even production versions. Such is the nature (complexity) of the beast. And manufacturers will send updates over the internet to patch things up.

I have no idea why I installed the firmware update since the headphones were working flawlessly. I’m sure it was out of habit; an expectation of better things to come from an update. Just like when I update my iPhone or MBP, I get better performance and maybe a few new bells and whistles (features).

Sound Quality Degradation

After the update, the noise cancelling quality of my QC35 II was degraded. I sat there in the library hearing the photocopier and background chatter; something I could never hear before. WTF! I tried the 2 ANC levels (high and low) and they were indistinguishable.

There was something wrong with the v4.5.2 firmware update.

Source: Bose Update RUINS Noise Cancelling??? (TESTED) — https://www.youtube.com/watch?v=yyC9QStmzcA&feature=youtu.be

Whether intentional or not, one has to question whether Bose took the $9-an-hour outsourced engineering route (Boeing is famous for doing so with the 737 MAX MCAS), because something like this surely could not happen if they owned the whole release process and had QA. However, the timing of these degrading updates coincides with the release of the more expensive Bose Noise Cancelling Headphones 700. Coincidence or not, I’ll leave that to the conspiracy experts to debate.

Next Steps

  1. Downgrade your Bose QuietComfort 35 II from 4.5.2 to 3.1.8. Yes, it’s a tad complex, and unfortunately Bose doesn’t support this, nor do they even explain what each version contains, so do this at your own risk.
  2. Send it back to Bose for replacement/repairs; but good luck. Customers who did say the returned units were just as bad.
  3. Leave your views/complaints on the Bose Community website to hopefully make them acknowledge this and fix it for good. Go here: https://community.bose.com/t5/Around-On-Ear-Headphones/Bose-QC-35-ii-firmware-4-5-2/td-p/213820

So who really owns your Bose QC35 headphones?

Bose.

They are the puppet master here. Controlling at will the quality of the headphones you paid them handsomely for.

Commanding a premium for average-quality sound gear with what used to be amazing ANC, then manipulating the quality of their ANC moat through ghost version updates to prop up newer, cheaper-built products (*cough* Bose 700) by degrading previous-generation units.

If you own the QC35 please let me know how your experience has been so far.

Garage startup: Everything starts as nothing

I just remembered something that needs to be bedded down as a post. We all know this very well, and have seen the picture below, and yet we forget. When we forget this basic truth we start behaving in an unsound manner, looking for big words to fuel the story our ego is improvising. Sometimes it’s best to catch ourselves and remember that everything starts as nothing. New things go from 0 to 1. And this is also how most startups are born: the garage startup.

The first short story comes directly from the corporate world. An individual I was speaking with let their belief system run amok and painted a picture that the company they are employed by is superior, expecting to deal only with companies of the same size (i.e. not startups). They even threw in a derogatory statement like “a man in a garage coding” to refer to companies they do not like. Hmmm…

The second story comes from the world of Facebook Groups, where a group of bookkeepers started talking about the software companies they use to provide tools for their clients. One explained the horror she felt when she learnt that one of the companies she was speaking to is a “dude in a garage”. *insert eye-roll emoji*… yeah.

Deflating the ego

If you are ever in such a situation please remind these lovely people that everything starts as nothing.

It’s easy to get a job and be part of something someone else laid the foundations for. What’s more special is starting something yourself. But it takes guts, grit and a go-getter attitude to take something from 0 to 1 without a safety net of it working out.

Apple, Google, Amazon and countless others were all garage startups at one point. I’m sure every company has a story of starting from nothing. The struggles. The courage. And, for some, the breakout. Here is that famous Silicon Valley picture to remind us of this fact.

Famous examples of Silicon Valley startups having started in the garage

Big things have small beginnings.

Billionaires Jack Ma vs. Elon Musk debate AI in Shanghai, China at the World Artificial Intelligence Conference

Recently, two billionaires debated artificial intelligence in Shanghai, China at the World Artificial Intelligence Conference: Jack Ma, the founder of Alibaba, and Elon Musk, aka Tony Stark, co-founder of PayPal and founder of Tesla, SpaceX, Hyperloop and Neuralink.

The robots are coming!

AI is in the news everywhere you look. The spectrum of predictions is entertaining and somewhat hilarious; from sentient Skynet like robots to the singularity. One thing is certain, predicting any future event is a cognitive bias we all should keep in mind. It’s nice to tickle our senses and day dream, but the reality is no one knows what the future looks like.

This was a rather weird interview. Elon Musk had to start the interview even though he was the one invited to Shanghai to discuss AI with Jack, and then there are the entertainingly polar-opposite views about the future of AI.

Enjoy!

Full Interview

Or watch this 5-minute version, which illustrates how weird this interview was.

General Intelligence vs Specialized Intelligence

There is a HUGE difference between general intelligence and specialized intelligence. It’s illogical to compare human (general) intelligence to specialized (machine, task-based) intelligence UNLESS we compare the tasks each can perform, and just like that… machines outperform humans.

Elon Musk was giving examples of specialized intelligence (e.g. Deep Blue, iPhones), which are all number crunchers and which, YES, outperform humans, just like most manufacturing robots outperform humans. Jack Ma’s reply that no machine ever invented humans is a straw man argument and illustrates how little he knows about AI. I guess money talks after all.

Will we ever create general intelligence? No one knows. What is certain is:

(a) we still don’t understand how the brain works,
(b) neural networks in machine learning only scratch at the basics of how neurons work in the human brain and
(c) let’s not forget the vast number of venture-backed companies using offshore labor to fake specialized intelligence.

So let’s not get caught up in robots/AI taking over our jobs.

Videos from re:MARS

I had a blast in Las Vegas at Amazon’s re:MARS Conference for Machine Learning, Automation, Robotics and Space. Below are some of the videos I recorded at the event. Hope you enjoy them!

Thank you Amazon for inviting Team Veryfi (my company) to attend this wonderful event! We had a blast, learnt a lot and rubbed shoulders with many smart folks.

Now onto the videos!

New Shepard (Blue Origin) Space Capsule Experience at #reMARS

Robert Downey Jr. at Amazon’s #reMARS in Las Vegas

Robert Downey Jr. appeared at Amazon’s re:MARS artificial intelligence conference in Las Vegas to entertain the crowd of engineers and scientists and make an awesome announcement. Robert, in true Tony Stark (aka Iron Man) fashion, announced the Footprint Coalition to clean up the world with advanced tech.

Jeff Bezos Keynote Fireside Chat at #reMARS with Jenny Freshwater

Jeff Bezos and Jenny Freshwater (Amazon’s Director of Forecasting) speak on stage on June 6, 2019 at the re:MARS conference for Machine Learning, Automation, Robotics and Space, held in Las Vegas (Nevada).

New Drone from Amazon Launch at #reMARS

Unveiled at re:MARS in Las Vegas (Nevada), an Amazon conference for Machine Learning, Automation, Robotics and Space.

The NEW Amazon Prime drone has a hybrid architecture: it flies like a plane and hovers like a traditional propeller drone. Amazon Prime delivery on auto-pilot at light speed anywhere in the world (give or take ;-))

Boston Dynamics CEO demo’s Spot at #reMARS

Video recorded at re:MARS, Amazon’s conference for Machine Learning, Automation, Robotics and Space, held in Las Vegas (Nevada). Boston Dynamics CEO Marc Raibert showcases Spot, the company’s first commercial robot.

SPOT

Spot is a small four-legged robot that comfortably fits in an office or home. It weighs 25 kg (30 kg if you include the arm). Spot is all-electric and can go for about 90 minutes on a charge, depending on what it is doing. Spot is the quietest robot we have built.

Spot inherits all of the mobility of its bigger brother, Spot Classic, while adding the ability to pick up and handle objects using its 5 degree-of-freedom arm and beefed up perception sensors. The sensor suite includes stereo cameras, depth cameras, an IMU, and position/force sensors in the limbs. These sensors help with navigation and mobile manipulation.

More about Spot: https://www.bostondynamics.com/spot

How Amazon Go AI sees customers engaging in store

Video from the re:MARS keynote explaining the challenges of computer vision at Amazon’s Go self-checkout stores.

And finally, we ended the week-long conference with a big party put on by Amazon at the Las Vegas Speedway.

Furrion Exo-Bionics Mech pulling Engine 3 Fire Truck at Amazon’s #reMARS in Las Vegas

This mech is powered by a human using his hands and legs. Strong enough to pull a fire engine! Engine #3 from the Las Vegas Fire Department. Love it.

Learn more about this mech: https://furrion.com/pages/exo-bionics

About Veryfi

Veryfi is automating bookkeeping, starting with the automation of time & materials for the architecture, engineering & construction (AEC) workforce. We help businesses of all sizes get access to Veryfi’s intelligent and secure mobile apps to:

  • automate statutory tax obligations (recording of financial transactions, labelling & categorization),
  • improve job costing and
  • empower financial prosperity through business intelligence.

Learn more: https://www.veryfi.com

Christian Von Koenigsegg: A man and his dream

I’ve known about Koenigsegg cars for a while. As a fan of fast cars, and having built & raced one myself, it’s only natural to be inspired by fast automobiles. There is an art to building race cars, something I appreciate today and often contrast with building a business. Having cofounded 2 venture-backed startups in Silicon Valley (including one backed by Y Combinator), I find the parallels similar yet different.

Recently I was watching the famous vlogger Mr JWW interview Christian von Koenigsegg about the new Koenigsegg Jesko hypercar. The name Jesko is a tribute to Christian’s father, Jesko von Koenigsegg. The Jesko is a high-performance track car with a focus on high aerodynamic downforce and more precise handling. It produces 941 kW (1,262 hp) from a 5 L (5,032 cc; 307 cu in) twin-turbocharged V8 and costs close to $3m.

The video interview struck me because of the detail and passion Christian has for his business, developing unique hypercars from the ground up. Christian knows all the intricate details that make up the cars his company manufactures. He is a walking, talking car dictionary. And not just cars: he is an innovator. Most parts inside a Koenigsegg are designed and built by Koenigsegg, not sourced from 3rd-party suppliers as other manufacturers would do.

So I had to dig up some info and learn more about him and his company. Off to the official Koenigsegg website I went.

About Christian Von Koenigsegg

Christian makes hypercars that rival the Bugattis and Ferraris of the world. Everything in a Koenigsegg car is custom made, from gearboxes to turbos to engines. His latest gearbox allows the driver to go from gear 9 to gear 3 in an instant to pull maximum power from the engine. The computer supports the driver by making sure that the gear the user selects is not going to blow the engine. His cars cost $1-2m on average. His company has ~200 employees.

It gets better!

What’s more amazing is this story on the about page found on his company’s website.

“In 1991, he invented a new solution for joining floor planks together without adhesive or nails. He called it Click, as the profile enabled the planks to simply click together. Christian presented this technology to his father-in-law in Belgium, who ran a flooring factory. He rejected the idea, saying that if it was viable, someone would have come up with it a long time ago. Christian then showed the concept to a few other floor manufacturers who also dismissed it. In 1995, a Belgian and a Swedish company patented the exact same solution as Christian’s Click floor – they even called it Click! This innovation has now turned into a multi-billion dollar industry…”

https://www.koenigsegg.com/christian-von-koenigsegg/

The moral of this story

The moral of this story: don’t fuckin’ listen to people who tell you it cannot be done. Push forward and believe in yourself. Because you never know what could one day come of those crazy ideas.

If you don’t do it, someone else will.

Regret is much harder to live with than investing some of your time to test your contrarian ideas and prove others wrong.

Go forth and play!

Your Twitter account has been locked

I should have known better than to change the date of birth on my company’s Twitter account (Veryfi) to the company’s inception date, 2 years ago. Without warning or any confirmation, Twitter immediately declared that I must be underage and locked the company account.

As panic set in, I explored my options. Oh, I can prove my identity to Twitter by submitting my driver’s license. But how will Twitter know I really own this company account if my identity is not tied to the Veryfi Twitter account? I shrugged and decided to share my driver’s license.

Little did I realize that their form was also broken: it kept rejecting the driver’s license photo and then disabling the “Upload image” button. A company with a market cap of $30B, based in the heart of SF, cannot do basic QA (quality assurance) on a feature that serves justice? No way! In the end I managed to get it working. There, sent.

My company also owns a YouTube account, on which we explain our services and products, and I have never come across such a situation there. In other words, YouTube’s algorithms and automated responses are way better than Twitter’s, and you can see in the image below why I say that.

Twitter’s broken form to prove your identity by sending your PII to who knows where.

A few hours later, silence.

I check Gmail.

A sentence has been served…

At this point I had enough. But, let’s rewind this a tad to reflect on this problem.

How to destroy trust

Jack (CEO of Twitter) gained my trust for Twitter when he came back to Joe Rogan (JRE Podcast) for the 2nd time to clean up the mess of his first appearance. Kudos to Joe Rogan for reeling him back in, and to Jack for facing the music. Jack arrived with his legal advisor, Vijaya Gadde. Tim Pool (an American journalist, YouTuber and political commentator) also joined to spice up the discussion.

During the interview, listening with laser-sharp focus, I gained more empathy and respect for Twitter and its daily challenges. A platform that is juggling free speech, propaganda and fake news. It is a bloody tough job! People’s emotions run wild. We all have something to say.

How does one determine truth from fact?

When does one hand down a sentence?

“ei incumbit probatio qui negat, non qui dicit” — Presumption of guilt, in Latin

Presumption of guilt is a default position based on pessimism and suspicion whereas presumption of innocence is based more on optimism and trust. So which playbook does Twitter really play?

But I digress.

Tim Pool drilled Jack and Vijaya on the decision-making process behind killing Twitter accounts: “Where does machine logic end and where does human intervention come in?” Of course the answers were colorful, but Jack made bold promises of a better review process and more care before killing accounts.

I was sold. I now gave Twitter more love. They had a hard problem to solve and their CEO had committed to fixing it.

Until, months later, I declared my company’s Twitter age to be 2. <insert-police-lights-here>

Should computers run the world?

Strangely enough, the night prior I was glued to a Royal Institution lecture by Hannah Fry (a British mathematician and author of Hello World) speaking about machine bias in decision making. You can catch it here on YouTube: https://youtu.be/Rzhpf1Ai7Z4

Hannah presented a fascinating topic. She asked the audience who would let machines dictate their fate in a legal system where machines are used to do just that. Being a software engineer by degree and a 2nd-time entrepreneur, I declared that I would rather let an algorithm decide my fate. My view was that thanks to a more logical approach, minus the human emotional bias that occurs during heated trials, a machine would lead to a fairer outcome.

“Ladies and gentlemen, please return your seats and tray-tables to their full upright positions, and extinguish all smoking material, as we’re about to land in the red zone. Ahh! No survivors!” — Fight Club

Hannah, I was wrong!

Where to from here…

Back to the case at hand.

The story is that Twitter decided to ban Veryfi’s Twitter account https://twitter.com/veryfinance because I wanted to tell the truth and give them the company’s actual date of birth, 2 years ago. Twitter’s logic was as simple (and stupid) as this:

IF age(dob) < 13 THEN
    LockAccountImmediately('DoNotConfirmWithUser')
    SendDeathNote('FewMinsLater')
ENDIF

12 hours later… I think Twitter actually deleted Veryfi’s Twitter Account without any due diligence.

Not a good sign when you visit your company’s Twitter account (https://twitter.com/veryfinance) and see this.

Earlier in the day I also complained to @TwitterSupport using my personal Twitter account. I’m pretty sure it fell on deaf ears, since no one acknowledged it or offered to help.

Tonight I went for a walk to reflect on this issue, walking past many famous companies based in Mountain View (CA); giants that have stood the test of time. It lifted my spirits and got me thinking: given Twitter’s pessimistic, authoritarian style and hypocrisy, will they stand the test of time?

I recalled a wise decision my wife and I made (over 10 years ago) to eliminate the idiot box (TV) from our home. We have not regretted it since. Books have replaced the space where the idiot box used to live.

Do we need Twitter? I don’t think so. It saps our energy and time. No business, big or small, needs Twitter.

And now I am at peace.

~ Ernest

PS. This article was also published on Medium: https://medium.com/the-road-to-silicon-valley/your-twitter-account-has-been-locked-7bff4e69300d

Links mentioned in the article

About Veryfi (banned from Twitter for being 2 years old)

Veryfi, Inc. is a California, US-based mobile software automation company founded in December 2016 and backed by Y Combinator and other prominent investors in Silicon Valley. Veryfi helps the Architecture, Engineering & Construction (AEC) workforce of all sizes get access to Veryfi’s smart mobile tools to eliminate 90% of time wasted doing data entry (& chasing records), improve job costing and empower their financial prosperity. To learn more visit: https://www.veryfi.com/

Dockerizing a web app, using Docker Compose for orchestrating multi-container infrastructure (part 1 of 3)


This is a GUEST BLOG POST by Andrew Bakonski

Head of International Engineering @ Veryfi.com

Andrew heads up International Engineering efforts at Veryfi supporting Veryfi’s Hybrid Infrastructure on AWS & Azure making sure the lights stay on.

Andrew’s LinkedIn: https://www.linkedin.com/in/andrew-bakonski/

A couple months ago we decided to move Veryfi’s Python-based web app onto Microsoft Azure. The process was complicated and involved several stages. First I had to Dockerize the app, then move it into a Docker Swarm setup, and finally set up a CI/CD pipeline using Jenkins and BitBucket. Most of this was new to me, so the learning curve was steep. I had limited experience with Python and knew of Docker and Jenkins, but had yet to dive into the deep end. After completing the task, I thought I could share my research and process with the Veryfi community.

I’ve compiled a three-part series that will cover these topics:

  1. Dockerizing a web app, using Docker Compose for orchestrating multi-container infrastructure
  2. Deploying to Docker Swarm on Microsoft Azure
  3. CI/CD using BitBucket, Jenkins, Azure Container Registry

This is the first post in the series.

I won’t go into a full-blown explanation of Docker – there are plenty of articles online that answer that question, and a good place to start is here. One brief (and incomplete) description is that Docker creates something similar to Virtual Machines, except that Docker containers run on the host machine’s OS, rather than on a VM. Each Docker container should ideally contain one service, and an application can comprise multiple containers. With this approach, individual containers (services) can easily be swapped out or scaled, independently of the others. For example, our main web app currently runs on 3 instances of the main Python app container, and they all speak to one single Redis container.

Dockerizing an app

Note: the example included in this section can be found in this GitHub repo: https://github.com/abakonski/docker-flask
The example here is a minimal, “Hello World” app.

Docker containers are defined by Docker images, which are essentially templates for the environment that a container will run in, as well as the service(s) that will be running within them. A Docker image is defined by a Dockerfile, which outlines what gets installed, how it’s configured etc. This file always first defines the base image that will be used.

Docker images comprise multiple layers. For example, our web app image is based on the “python:3.6” image (https://github.com/docker-library/python/blob/d3c5f47b788adb96e69477dadfb0baca1d97f764/3.6/jessie/Dockerfile). This Python image is based on several layers of images containing various Debian Jessie build dependencies, which are ultimately based on a standard Debian Jessie image. It’s also possible to base a Docker image on “scratch” – an empty image that is the very top-level base image of all other Docker images, which allows for a completely customizable image, from OS to the services and any other software.
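
You can inspect these layers yourself with docker history, for example against the same Python base image mentioned above:

# List the layers (and their sizes) that make up the python:3.6 image
$ docker pull python:3.6
$ docker history python:3.6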

In addition to defining the base image, the Dockerfile also defines things like:

  • Environment variables
  • Package/dependency install steps
  • Port configuration
  • Environment set up, including copying application code to the image and any required file system changes
  • A command to start the service that will run for the duration of the Docker container’s life

This is an example Dockerfile:

FROM python:3.6

# Set up environment variables
ENV NGINX_VERSION '1.10.3-1+deb9u1'

# Install dependencies
RUN apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 \
    && echo "deb http://httpredir.debian.org/debian/ stretch main contrib non-free" >> /etc/apt/sources.list \
    && echo "deb-src http://httpredir.debian.org/debian/ stretch main contrib non-free" >> /etc/apt/sources.list \
    && apt-get update -y \
    && apt-get install -y -t stretch openssl nginx-extras=${NGINX_VERSION} \
    && apt-get install -y nano supervisor \
    && rm -rf /var/lib/apt/lists/*


# Expose ports
EXPOSE 80

# Forward request and error logs to Docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log

# Make NGINX run on the foreground
RUN if ! grep --quiet "daemon off;" /etc/nginx/nginx.conf ; then echo "daemon off;" >> /etc/nginx/nginx.conf; fi;

# Remove default configuration from Nginx
RUN rm -f /etc/nginx/conf.d/default.conf \
    && rm -rf /etc/nginx/sites-available/* \
    && rm -rf /etc/nginx/sites-enabled/*

# Copy the modified Nginx conf
COPY /conf/nginx.conf /etc/nginx/conf.d/

# Custom Supervisord config
COPY /conf/supervisord.conf /etc/supervisor/conf.d/supervisord.conf

# COPY requirements.txt and RUN pip install BEFORE adding the rest of your code; this lets Docker's caching mechanism
# skip re-installing all of your dependencies when you change a line or two in your app
COPY /app/requirements.txt /home/docker/code/app/
RUN pip3 install -r /home/docker/code/app/requirements.txt

# Copy app code to image
COPY /app /app
WORKDIR /app

# Copy the base uWSGI ini file to enable default dynamic uwsgi process number
COPY /app/uwsgi.ini /etc/uwsgi/
RUN mkdir -p /var/log/uwsgi


CMD ["/usr/bin/supervisord"]

Here’s a cheat sheet of the commands used in the above example:

  • FROM – this appears at the top of all Dockerfiles and defines the image that this new Docker image will be based on. This could be a public image (see https://hub.docker.com/) or a local, custom image
  • ENV – this command sets environment variables that are available within the context of the Docker container
  • EXPOSE – this opens ports into the Docker container so traffic can be sent to them. These ports will still need to be listened on from within the container (i.e. NginX could be configured to listen on port 80). Without this EXPOSE command, no traffic from outside the container will be able to get through on those ports
  • RUN – this command will run shell commands inside the container (when the image is being built)
  • COPY – this copies files from the host machine to the container
  • CMD – this is the command that will execute on container launch and will dictate the life of the container. If it’s a service, such as NginX, the container will continue to run for as long as NginX is up. If it’s a quick command (i.e. “echo ‘Hello world’”), then the container will stop running as soon as the command has executed and exited

The Docker image resulting from the above Dockerfile will be based on the Python 3.6 image and contain NginX and a copy of the app code. The Python dependencies are all listed in requirements.txt and are installed as part of the process. NginX, uWSGI and supervisord are all configured as part of this process as well.

This setup breaks the rule of thumb for the “ideal” way of using Docker, in that one container runs more than one service (i.e. NginX and uWSGI). It was a case-specific decision to keep things simple. Of course, there could be a separate container running just NginX and one running uWSGI, but for the time being, I’ve left the two in one container.

These services are both run and managed with the help of supervisord. Here’s the supervisord config file that ensures NginX and uWSGI are both running:

[supervisord]
nodaemon=true

[program:uwsgi]
# Run uWSGI with custom ini file
command=/usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:nginx]
# NginX will use a custom conf file (ref: Dockerfile)
command=/usr/sbin/nginx
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

Launching a Docker container

I’m not including instructions on installing Docker in this post (a good place to get started is here).

With the above project set up and Docker installed, the next step is to actually launch a Docker container based on the above image definition.

First, the Docker image must be built. In this example, I’ll tag (name) the image as “myapp”. In whatever terminal/shell is available on the machine you’re using (I’m running the Mac terminal), run the following command:

$ docker build -t myapp .

Next, run a container based on the above image using one of the following commands:

# run Docker container in interactive terminal mode - this will print logs to the terminal stdout, hitting command+C (or Ctrl+C etc) will kill the container
$ docker run -ti -p 80:80 myapp

# run Docker container quietly in detached/background mode - the container will need to be killed with the "docker kill" command (see next code block below)
$ docker run -d -p 80:80 myapp

The above commands will direct traffic to port 80 on the host machine to the Docker container’s port 80. The Python app should now be accessible on port 80 on localhost (i.e. open http://localhost/ in a browser on the host machine).
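
Assuming the container came up cleanly, a quick way to confirm the app is responding is to hit that port with curl:

# Send a test request to the app through the mapped port
$ curl -i http://localhost/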

Here are some helpful commands to see what’s going on with the Docker container and perform any required troubleshooting:

# list running Docker containers
$ docker ps


# show logs for a specific container
$ docker logs [container ID]


# connect to a Docker container's bash terminal
$ docker exec -it [container ID] bash


# stop a running container
$ docker kill [container ID]


# remove a container
$ docker rm [container ID]


# get a list of available Docker commands
$ docker --help

Docker Compose

Note: the example included in this section is contained in this GitHub repo: https://github.com/abakonski/docker-compose-flask
As above, the example here is minimal.

The above project is a good start, but it’s a very limited example of what Docker can do. The next step in setting up a microservice infrastructure is through the use of Docker Compose. Typically, most apps will comprise multiple services that interact with each other. Docker Compose is a pretty simple way of orchestrating exactly that. The concept is that you describe the environment in a YAML file (usually named docker-compose.yml) and launch the entire environment with just one or two commands.

This YAML file describes things like:

  • The containers that need to run (i.e. the various services)
  • The various storage mounts and the containers that have access to them – this makes it possible for various services to have shared access to files and folders
  • The various network connections over which containers can communicate with each other
  • Other configuration parameters that will allow containers to work together
Here’s the example docker-compose.yml:

version: '3'

services:
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
    networks:
      - mynet

  web:
    build: .
    image: myapp:latest
    ports:
      - "80:80"
    networks:
      - mynet

networks:
  mynet:

The above YAML file defines two Docker images that our containers will be based on, and one network that both containers will be connected to so that they can “talk” to each other.

In this example, the first container will be created based on the public “redis:alpine” image. This is a generic image that runs a Redis server. The “ports” setting is used to open a port on the container and map it to a host port. The syntax for ports is “HOST:CONTAINER”. In this example we forward the host port 6379 to the same port in the container. Lastly, we tell Docker Compose to put the Redis container on the “mynet” network, which is defined at the bottom of the file.

The second container defined will be based on a custom local image, namely the one that’s outlined in the first section of this article. The “build” setting here simply tells Docker Compose to build the Dockerfile that is sitting in the same directory as the YAML file (./Dockerfile) and tag that image with the value of “image” – in this case “myapp:latest”. The “web” container is also going to run on the “mynet” network, so it will be able to communicate with the Redis container and the Redis service running within it.

Finally, there is a definition for the “mynet” network at the bottom of the YAML file. This is set up with the default configuration.
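
Once the stack is running (see the commands below), a quick sanity check that the two containers can actually see each other over “mynet” is to resolve the Redis service name from inside the “web” container; service names resolve via Docker’s embedded DNS on a shared network, and the web image is based on python:3.6, so the python binary is available:

# Resolve the "redis" service name from inside the "web" container
$ docker-compose exec web python -c "import socket; print(socket.gethostbyname('redis'))"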

This is a very basic setup, just to get a basic example up and running. There is a ton of info on Docker Compose YAML files here.

Once the docker-compose.yml file is ready, build it (in this case only the “web” project will actually be built, as the “redis” image will just be pulled from the public Docker hub repo). Then bring up the containers and network:

# build all respective images
$ docker-compose build

# create containers, network, etc
$ docker-compose up

# as above, but in detached mode
$ docker-compose up -d
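
Once everything is up, these two are handy for watching the whole stack and tearing it down again:

# Tail the logs of every service in the Compose project
$ docker-compose logs -f

# Stop and remove the containers and the network
$ docker-compose down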

Refer to the Docker commands earlier in this article for managing the containers created by Docker Compose. When in doubt, use the “–help” argument, as in:

# general Docker command listing and help
$ docker --help

# Docker network help
$ docker network --help

# Help with specific Docker commands
$ docker <command> --help

# Docker Compose help
$ docker-compose --help

So there you have it – a “Hello World” example of Docker and Docker Compose.

Just remember that this is a starting point. Anyone diving into Docker for the first time will find themselves sifting through the official Docker docs and StackOverflow forums etc, but hopefully this post is a useful intro. Stay tuned for my follow-up posts that will cover deploying containers into Docker Swarm on Azure and then setting up a full pipeline into Docker Swarm using Jenkins and BitBucket.

If you have any feedback, questions or insights, feel free to reach out in the comments.

~ Andrew @ Veryfi.com

About Veryfi

Veryfi is a Y Combinator company (W17 cohort), located in San Mateo (CA) and founded by an Australian, Ernest Semerda, and the first Belarusian to go through Y Combinator, Dmitry Birulia.

Veryfi provides mobile-first, HIPAA-compliant bookkeeping software that empowers business owners by automating the tedious parts of accounting through AI and machine learning.

To learn more please visit https://www.veryfi.com