Recover gigs of storage with this simple hack

I recovered 30 gigs of storage on my MBP (MacBook Pro) with this simple hack: converting my PNGs to JPEGs. Here’s how.

If you own a Mac or an iPhone, you’ll notice both devices save screenshots with a PNG extension. Portable Network Graphics (.png) is an image file format that uses lossless compression to save your images. You may be thinking lossless compression is the best option, but the reality is that JPEG, a lossy format, is nearly as good at retaining image quality at a MASSIVE fraction of the size of a PNG.

Image sizes really matter when your device runs a high-resolution display (most modern Macs) or a new OLED iPhone. A full-size (no crop) screenshot on my MBP at 2080×1080 yields a 7MB (7,168KB) image. The JPEG equivalent is 500KB. That’s ~14x smaller. If you are like me and take screenshots as reminders or todos (GTD baby!) then you’ll be chewing through storage fast.

A hundred of these PNGs and you’ll be reaching 1GB territory.

With storage being dirt cheap, who cares, right? Not so. Lugging an external drive around with a laptop is annoying, and iPhones have no expansion slots. Furthermore, data transfer is a burden: upload speeds are conveniently ignored, yet they matter if you are backing up to the cloud. And let’s face it, who’s got time to sit around waiting when the same lot of photos, with identical quality (assuming you aren’t blowing them up on a wall), can be backed up to your Dropbox cloud storage 14x faster? GTD baby!

Did I mention this will also speed up Spotlight search and indexing, extend your SSD’s life, and open those images faster?

Convert PNGs to JPEGS

PNGs mainly come from screenshots on your Mac or iPhone. They don’t need to stay PNGs unless you are really picky about text sharpness, i.e. under JPEG, text becomes a tad blurrier since the compression reuses surrounding pixels to make the image smaller.

Overall, there is little reason to retain PNGs unless you do a lot of photo editing and need that pixel-level detail, especially for font/text clarity.

[1] Identify Opportunities

Run a scan to identify where the opportunities (PNGs) are located on your drive.

$ find . -type f -iname '*.png' | wc -l

find . -type f finds all files (-type f) in this directory (.) and all subdirectories; the filenames are then printed to standard out, one per line.

This is then piped (|) into wc (word count); the -l option tells wc to count only the lines of its input.
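To size up the opportunity in bytes rather than file counts, a small helper can total the PNG footprint. This is a sketch; `total_png_bytes` is a made-up name, and streaming the files through `cat`/`wc -c` is just a portable way to sum sizes without worrying about GNU vs BSD `du` flags:

```shell
# Sum the bytes of every PNG under a directory (defaults to the current one).
# cat streams each matched file's contents; wc -c counts the total bytes;
# tr strips the leading padding BSD wc adds.
total_png_bytes() {
  find "${1:-.}" -type f -iname '*.png' -exec cat {} + | wc -c | tr -d ' '
}

# e.g. total_png_bytes ~/Desktop   -> total bytes of PNGs on the Desktop
```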

If you only want files directly under this directory and not to search recursively through subdirectories, you could add the -maxdepth flag:

$ find some_directory -maxdepth 1 -type f | wc -l

The key to the case-insensitive search is the -iname option, which differs from -name by only one character: it is what makes the match case-insensitive.
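A quick way to see the difference: in a scratch directory containing `a.png` and `b.PNG`, `-name` finds one file while `-iname` finds both (`find`'s pattern match is case-sensitive regardless of the filesystem). The helper name below is made up for illustration:

```shell
# Count files matching *.png in a directory, using the given find flag
# ($2 is either -name for case-sensitive or -iname for case-insensitive).
count_matches() {
  find "$1" -type f "$2" '*.png' | wc -l | tr -d ' '
}

# With a.png and b.PNG present:
#   count_matches somedir -name    -> 1 (misses b.PNG)
#   count_matches somedir -iname   -> 2 (catches both)
```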

[2] Convert

Create JPEG versions of the PNGs and remove the old PNGs. There is no need to keep the originals; they are the ones taking up all the space.

Run this on a small subset of your PNGs to make sure you are happy with the resulting JPEG.

$ mogrify -format jpg *.png && rm *.png
$ mogrify -format jpg *.PNG && rm *.PNG

or convert and keep the original PNG

$ mogrify -format jpg *.png
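The one-liners above delete originals in bulk. If you’d rather delete each PNG only after its own conversion succeeds, here is a per-file sketch (the function name is made up; it assumes ImageMagick’s `mogrify` is on your PATH):

```shell
# Convert each PNG in a directory to JPEG, deleting an original only
# when mogrify reports success for that specific file.
convert_pngs() {
  for f in "$1"/*.png "$1"/*.PNG; do
    [ -e "$f" ] || continue          # skip the literal glob when nothing matches
    mogrify -format jpg "$f" && rm "$f"
  done
}
```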

[3] Celebrate

How much space did you recover?

Who really owns your Bose QC35 headphones?

I was excited to finally get my hands on the new Bose QC35 II because noise simply annoys me more than the average bear. The world we live in today is very noisy, from cars to traffic lights to photocopiers to background chatter, and it’s something some of us have learnt to live with while others suffer from the disruption. I’m in the latter crew. Until, that is, a $300 pair of Bose QC35 IIs became my friend, even if only for a short period of time.

During this short period they were amazing. The ANC (Active Noise Cancelling) was superb! I was so excited I started showcasing them (free marketing for Bose) to all my software engineer and entrepreneur friends who, like me, seek silence for focused work, telling them how their lives would change. I also sold my wife on them as a solution for plane travel. Noise inside planes reaches 80dB, the sound of a vacuum cleaner next to your ears, and over a trip from San Francisco to Sydney that kind of exposure has been shown to damage ear drums.

That is until I upgraded to firmware 4.5.2.

Enter the Firmware

You see, the Bose QC35 II has a computer inside which uses multiple strategically placed microphones to listen to incoming noise and cancel out the sound waves. This is orchestrated by a small onboard computer (think Arduino) running custom Bose software to run and manage the hardware. Hence ANC. Software has bugs, even in production versions; such is the nature (complexity) of the beast. And manufacturers will send updates over the internet to patch things up.

I have no idea why I installed the firmware update, since the headphones were working flawlessly. I’m sure it was out of habit; an expectation of better things to come from an update. Just like when I update my iPhone or MBP, I get better performance and maybe a few new bells and whistles (features).

Sound Quality Degradation

After the update, the noise cancelling on my QC35 II was degraded. I sat there in the library hearing the photocopier and background chatter, something I could never hear before. WTF! I tried the 2 ANC levels (high and low) and they were indistinguishable.

There was something wrong with the v4.5.2 firmware update.

Source: Bose Update RUINS Noise Cancelling??? (TESTED)

Whether intentional or not, one has to question whether Bose took the $9-an-hour outsourced-engineering route (Boeing famously did so with the 737 MAX MCAS), because something like this surely could not happen if they owned the whole release process and had proper QA. The timing of these degrading updates also coincides with the release of the more expensive Bose Noise Cancelling Headphones 700. Coincidence or not, I’ll leave that to the conspiracy experts to debate.

Next Steps

  1. Downgrade your Bose QuietComfort 35 II from 4.5.2 to 3.1.8. Yes, it’s a tad complex, and unfortunately Bose doesn’t support this, nor do they even explain what each version contains, so do this at your own risk.
  2. Send them back to Bose for replacement/repair; but good luck. Customers who did say the returned units were just as bad.
  3. Leave your views/complaints on the Bose Community website to hopefully make them acknowledge this and fix it for good.

So who really owns your Bose QC35 headphones?


They are the puppet master here, controlling at will the quality of the headphones you paid them handsomely for.

Commanding a premium for average-quality sound gear with what used to be amazing ANC, then manipulating the quality of that ANC moat through ghost firmware updates to prop up new, cheaper-build products (*cough* Bose 700) by degrading previous-generation units.

If you own the QC35 please let me know how your experience has been so far.

Dockerizing a web app, using Docker Compose for orchestrating multi-container infrastructure (part 1 of 3)


This is a GUEST BLOG POST by Andrew Bakonski

Head of International Engineering @ Veryfi

Andrew heads up International Engineering efforts at Veryfi supporting Veryfi’s Hybrid Infrastructure on AWS & Azure making sure the lights stay on.

A couple months ago we decided to move Veryfi’s Python-based web app onto Microsoft Azure. The process was complicated and involved several stages. First I had to Dockerize the app, then move it into a Docker Swarm setup, and finally set up a CI/CD pipeline using Jenkins and BitBucket. Most of this was new to me, so the learning curve was steep. I had limited experience with Python and knew of Docker and Jenkins, but had yet to dive into the deep end. After completing the task, I thought I could share my research and process with the Veryfi community.

I’ve compiled a three-part series that will cover these topics:

  1. Dockerizing a web app, using Docker Compose for orchestrating multi-container infrastructure
  2. Deploying to Docker Swarm on Microsoft Azure
  3. CI/CD using BitBucket, Jenkins, Azure Container Registry

This is the first post in the series.

I won’t go into a full-blown explanation of Docker – there are plenty of articles online that do, and a good place to start is here. One brief (and incomplete) description is that Docker creates something similar to Virtual Machines, except that Docker containers run on the host machine’s OS, rather than on a VM. Each Docker container should ideally contain one service, and an application can comprise multiple containers. With this approach, individual containers (services) can easily be swapped out or scaled out, independently of the others. For example, our main web app currently runs on 3 instances of the main Python app container, and they all speak to one single Redis container.

Dockerizing an app

Note: the example included in this section can be found in this GitHub repo:
The example here is a minimal, “Hello World” app.

Docker containers are defined by Docker images, which are essentially templates for the environment that a container will run in, as well as the service(s) that will be running within them. A Docker image is defined by a Dockerfile, which outlines what gets installed, how it’s configured etc. This file always first defines the base image that will be used.

Docker images comprise multiple layers. For example, our web app image is based on the “python:3.6” image. This Python image is based on several layers of images containing various Debian Jessie build dependencies, which are ultimately based on a standard Debian Jessie image. It’s also possible to base a Docker image on “scratch” – an empty image that is the very top-level base image of all other Docker images, which allows for a completely customizable image, from the OS to the services and any other software.

In addition to defining the base image, the Dockerfile also defines things like:

  • Environment variables
  • Package/dependency install steps
  • Port configuration
  • Environment set up, including copying application code to the image and any required file system changes
  • A command to start the service that will run for the duration of the Docker container’s life

This is an example Dockerfile:

FROM python:3.6

# Set up environment variables
ENV NGINX_VERSION '1.10.3-1+deb9u1'

# Install dependencies
RUN apt-key adv --keyserver hkp:// --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 \
    && echo "deb stretch main contrib non-free" >> /etc/apt/sources.list \
    && echo "deb-src stretch main contrib non-free" >> /etc/apt/sources.list \
    && apt-get update -y \
    && apt-get install -y -t stretch openssl nginx-extras=${NGINX_VERSION} \
    && apt-get install -y nano supervisor \
    && rm -rf /var/lib/apt/lists/*

# Expose ports (80 here, matching the -p 80:80 mapping used later when running)
EXPOSE 80

# Forward request and error logs to Docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log

# Make NGINX run on the foreground
RUN if ! grep --quiet "daemon off;" /etc/nginx/nginx.conf ; then echo "daemon off;" >> /etc/nginx/nginx.conf; fi;

# Remove default configuration from Nginx
RUN rm -f /etc/nginx/conf.d/default.conf \
    && rm -rf /etc/nginx/sites-available/* \
    && rm -rf /etc/nginx/sites-enabled/*

# Copy the modified Nginx conf
COPY /conf/nginx.conf /etc/nginx/conf.d/

# Custom Supervisord config
COPY /conf/supervisord.conf /etc/supervisor/conf.d/supervisord.conf

# COPY requirements.txt and RUN pip install BEFORE adding the rest of your code, this will cause Docker's caching mechanism
# to prevent re-installing all of your dependencies when you change a line or two in your app
COPY /app/requirements.txt /home/docker/code/app/
RUN pip3 install -r /home/docker/code/app/requirements.txt

# Copy app code to image
COPY /app /app

# Copy the base uWSGI ini file to enable default dynamic uwsgi process number
COPY /app/uwsgi.ini /etc/uwsgi/
RUN mkdir -p /var/log/uwsgi

CMD ["/usr/bin/supervisord"]

Here’s a cheat sheet of the commands used in the above example:

  • FROM – this appears at the top of all Dockerfiles and defines the image that this new Docker image will be based on. This could be a public image (e.g. from Docker Hub) or a local, custom image
  • ENV – this command sets environment variables that are available within the context of the Docker container
  • EXPOSE – this opens ports into the Docker container so traffic can be sent to them. Something still needs to listen on those ports within the container (e.g. NginX could be configured to listen on port 80). Without the EXPOSE command, no traffic from outside the container will be able to get through on those ports
  • RUN – this command will run shell commands inside the container (when the image is being built)
  • COPY – this copies files from the host machine to the container
  • CMD – this is the command that executes on container launch and dictates the life of the container. If it’s a service, such as NginX, the container will continue to run for as long as NginX is up. If it’s a quick command (e.g. “echo ‘Hello world'”), then the container will stop running as soon as the command has executed and exited

The Docker image resulting from the above Dockerfile will be based on the Python 3.6 image and contain NginX and a copy of the app code. The Python dependencies are all listed in requirements.txt and are installed as part of the process. NginX, uWSGI and supervisord are all configured as part of this process as well.

This setup breaks the rule of thumb for the “ideal” way of using Docker, in that one container runs more than one service (i.e. NginX and uWSGI). It was a case-specific decision to keep things simple. Of course, there could be a separate container running just NginX and one running uWSGI, but for the time being, I’ve left the two in one container.

These services are both run and managed with the help of supervisord. Here’s the supervisord config file that ensures NginX and uWSGI are both running:


[supervisord]
nodaemon=true

[program:uwsgi]
# Run uWSGI with custom ini file
command=/usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini

[program:nginx]
# NginX will use a custom conf file (ref: Dockerfile)
# (standard Debian binary path assumed here)
command=/usr/sbin/nginx

Launching a Docker container

I’m not including the instructions on installing Docker in this post (a good place to get started is here)

With the above project set up and Docker installed, the next step is to actually launch a Docker container based on the above image definition.

First, the Docker image must be built. In this example, I’ll tag (name) the image as “myapp”. In whatever terminal/shell is available on the machine you’re using (I’m running the Mac terminal), run the following command:

$ docker build -t myapp .

Next, run a container based on the above image using one of the following commands:

# run Docker container in interactive terminal mode - this will print logs to the terminal stdout, hitting command+C (or Ctrl+C etc) will kill the container
$ docker run -ti -p 80:80 myapp

# run Docker container quietly in detached/background mode - the container will need to be killed with the "docker kill" command (see next code block below)
$ docker run -d -p 80:80 myapp

The above commands will direct traffic to port 80 on the host machine to the Docker container’s port 80. The Python app should now be accessible on port 80 on localhost (i.e. open http://localhost/ in a browser on the host machine).

Here are some helpful commands to see what’s going on with the Docker container and perform any required troubleshooting:

# list running Docker containers
$ docker ps

# show logs for a specific container
$ docker logs [container ID]

# connect to a Docker container's bash terminal
$ docker exec -it [container ID] bash

# stop a running container
$ docker kill [container ID]

# remove a container
$ docker rm [container ID]

# get a list of available Docker commands
$ docker --help

Docker Compose

Note: the example included in this section is contained in this GitHub repo:
As above, the example here is minimal.

The above project is a good start, but it’s a very limited example of what Docker can do. The next step in setting up a microservice infrastructure is through the use of Docker Compose. Typically, most apps will comprise multiple services that interact with each other. Docker Compose is a pretty simple way of orchestrating exactly that. The concept is that you describe the environment in a YAML file (usually named docker-compose.yml) and launch the entire environment with just one or two commands.

This YAML file describes things like:

  • The containers that need to run (i.e. the various services)
  • The various storage mounts and the containers that have access to them – this makes it possible for various services to have shared access to files and folders
  • The various network connections over which containers can communicate with each other
  • Other configuration parameters that will allow containers to work together
version: '3'

services:
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
    networks:
      - mynet

  web:
    build: .
    image: myapp:latest
    ports:
      - "80:80"
    networks:
      - mynet

networks:
  mynet:


The above YAML file defines two Docker images that our containers will be based on, and one network that both containers will be connected to so that they can “talk” to each other.

In this example, the first container will be created based on the public “redis:alpine” image. This is a generic image that runs a Redis server. The “ports” setting is used to open a port on the container and map it to a host port. The syntax for ports is “HOST:CONTAINER”. In this example we forward the host port 6379 to the same port in the container. Lastly, we tell Docker compose to put the Redis container on the “mynet” network, which is defined at the bottom of the file.

The second container defined will be based on a custom local image, namely the one that’s outlined in the first section of this article. The “build” setting here simply tells Docker Compose to build the Dockerfile that is sitting in the same directory as the YAML file (./Dockerfile) and tag that image with the value of “image” – in this case “myapp:latest”. The “web” container is also going to run on the “mynet” network, so it will be able to communicate with the Redis container and the Redis service running within it.

Finally, there is a definition for the “mynet” network at the bottom of the YAML file. This is set up with the default configuration.

This is a very basic setup, just to get a basic example up and running. There is a ton of info on Docker Compose YAML files here.

Once the docker-compose.yml file is ready, build it (in this case only the “web” project will actually be built, as the “redis” image will just be pulled from the public Docker Hub repo). Then bring up the containers and network:

# build all respective images
$ docker-compose build

# create containers, network, etc
$ docker-compose up

# as above, but in detached mode
$ docker-compose up -d

Refer to the Docker commands earlier in this article for managing the containers created by Docker Compose. When in doubt, use the “--help” argument, as in:

# general Docker command listing and help
$ docker --help

# Docker network help
$ docker network --help

# Help with specific Docker commands
$ docker <command> --help

# Docker Compose help
$ docker-compose --help

So there you have it – a “Hello World” example of Docker and Docker Compose.

Just remember that this is a starting point. Anyone diving into Docker for the first time will find themselves sifting through the official Docker docs and StackOverflow forums etc, but hopefully this post is a useful intro. Stay tuned for my follow-up posts that will cover deploying containers into Docker Swarm on Azure and then setting up a full pipeline into Docker Swarm using Jenkins and BitBucket.

If you have any feedback, questions or insights, feel free to reach out in the comments.

~ Andrew @

About Veryfi

Veryfi is a Y Combinator company (W17 cohort), located in San Mateo, CA, and founded by an Australian, Ernest Semerda, and the first Belarusian to go through Y Combinator, Dmitry Birulia.

Veryfi provides mobile-first, HIPAA-compliant bookkeeping software that empowers business owners by automating the tedious parts of accounting through AI and machine learning.

To learn more please visit

Linux Server Security Checklist

I had this sitting around in my Google Docs for some time. It’s a good idea to share these Linux security tips to help others secure their boxes. So here it is, peeps.

Linux security – paranoid check-list

  1. For direct access to your box, only use ssh. SSH is the most secure standard for both authentication (both host and user) and data protection (everything strongly encrypted, end-to-end).
  2. Enable key-pairs as the only way to access your box. Don’t allow password logins. Most passwords are too short and sit (even if in hashed form) in many databases: your bank, your favorite retailer, etc. My guide on SSH setup walks you through this via the following setting in sshd_config.
    PasswordAuthentication no
  3. Run ssh on a high port. The reason is that a lot of security scanners will only scan the standard known-service ports or the lower range (1-1024 are privileged ports that only the superuser can bind/listen to, so they are more attractive to hackers). So running on, say, 43256 (there are 2^16 ≈ 65k ports) is much safer.
  4. In the firewall rules, limit access to your (and your customers’) IP blocks, i.e. instead of 0.0.0.0/0 (all the internet) allow only a specific block.
  5. Control the users who are allowed entry to your server.
    sudo nano /etc/ssh/sshd_config
    AllowUsers username1 username2
  6. Never ever permit root logins:
    sudo nano /etc/ssh/sshd_config
    PermitRootLogin no
  7. All administrative work is done as a known user (for accountability) who uses ‘sudo’ after authenticating via SSH.
  8. Use a second-layer (software) firewall in case the first goes down. On Linux you can use iptables with Gufw, one of the easiest firewalls in the world, to manage iptables.
    sudo apt-get install gufw
  9. Run logcheck, a periodic system log scanner that will email you about any unusual events. logcheck comes with a very large rule-set of what can be safely ignored, so it only emails when something really new and different shows up in the logs.
    sudo apt-get install logcheck
    sudo nano /etc/logcheck/logcheck.conf
    # Add your email to SENDMAILTO
    sudo -u logcheck logcheck # run a test
  10. Run tripwire, a service that scans all the executables on the system and alerts when a signature has changed (i.e. a file has been replaced). There is also a good post on setting up Tripwire in Ubuntu 11.10 as an intrusion detection system.
    sudo apt-get install tripwire
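Pulling the SSH-related settings above into one place, the relevant sshd_config fragment would look something like this (43256 is the example port from point 3; the usernames are placeholders):

```
Port 43256
PasswordAuthentication no
PermitRootLogin no
AllowUsers username1 username2
```

Reload sshd after editing (e.g. sudo service ssh reload) and keep your current session open until you’ve confirmed a new key-based login still works.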

And that’s a wrap! Are there any others you would recommend?

~ Ernest

How to: SSH secure key authentication on Ubuntu

OpenSSH is the most widely used SSH server on Linux. Using SSH, one can connect to a remote host and gain shell access on it in a secure manner, as all traffic is encrypted.

A neat feature of OpenSSH is the ability to authenticate a user with a public/private key pair when logging into the remote host. By doing so, you won’t be prompted for the remote user’s password when gaining access to a protected server. Of course, you have to hold the key for this to work. By using key-based authentication and disabling standard user/password authentication, we reduce the risk of someone gaining access to our machine(s).

So if you are not using SSH with a public/private key pair, here is how to get rolling. If you are using AWS (Amazon Web Services) you will have been forced to use this method already. This is great! The instructions below will teach you a bit about it and provide insight into setting this up on your dev VM or a server which doesn’t have this level of security turned on.

Useful commands to note

Accessing server using key



ssh -i ./Security/aws/myname_rsa root@ -p 22345

Restart SSH server

sudo /etc/init.d/ssh restart

Install & Setup SSH Security Access

Note: This section is for admins only.

On your Server (remote host) Locally on your box
1. Install SSH, if not already installed.
sudo apt-get install openssh-server
sudo apt-get install openssh-client

Make sure you change your server (and firewall, if present) to listen on port 22345 (or a similar port of your liking in the high range) vs the standard, insecure 22.

Via Shell

sudo nano /etc/ssh/sshd_config
sudo /etc/init.d/ssh restart


In Webmin > SSH Server > Networking > Listen on port = 22345

How to install Webmin instructions are here:

2. Create a public/private key pair.
ssh-keygen -t rsa

This will generate the keys using the RSA authentication identity of the user. Why RSA instead of DSA? RSA keys can be 2048 bits or larger, while DSA is restricted to 1024-bit keys.

By default the public key is saved in the file ~/.ssh/id_rsa.pub,
while the private key is ~/.ssh/id_rsa.
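If you prefer to generate the pair non-interactively, with an explicit key size and output path, something like this works (the paths and comment are examples; -N "" sets an empty passphrase, so only do that for throwaway or agent-protected keys):

```shell
# Generate a 4096-bit RSA key pair without prompts, into a scratch directory.
# -f sets the output file, -N "" an empty passphrase, -C a comment label.
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 4096 -N "" -C "myname@myhost" -f "$keydir/myname_rsa" -q
ls "$keydir"   # myname_rsa (private key) and myname_rsa.pub (public key)
```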

3. Copy the generated public key file to the remote host. Use SFTP, take it from:
/Users/name/.ssh/id_rsa.pub and drop it into the remote host path:
/root/.ssh/myname_rsa.pub
Note: If that folder doesn’t exist then create it.
sudo mkdir /root/.ssh/
4. SSH into the remote host and append the key to ~/.ssh/authorized_keys by entering:
cat /root/.ssh/myname_rsa.pub >> ~/.ssh/authorized_keys
rm /root/.ssh/myname_rsa.pub
4.1. Check the permissions on the authorized_keys file. Only the authenticated user should have read and write permissions. If the permissions are not correct, change them with:
chmod 600 ~/.ssh/authorized_keys
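To verify the permissions actually ended up as 600, here is a small portable check (a sketch with a made-up function name; `stat` flags differ between GNU and BSD, so this parses `ls -l` instead):

```shell
# True (exit 0) when a file's mode string is exactly -rw------- (i.e. 600).
# cut -c1-10 takes only the type/permission characters, ignoring any
# trailing ACL/SELinux marker some systems append.
perms_ok() {
  [ "$(ls -l "$1" | cut -c1-10)" = "-rw-------" ]
}

# e.g. perms_ok ~/.ssh/authorized_keys && echo "locked down"
```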
5. Enable SSH public/private key pair access.
sudo nano /etc/ssh/sshd_config

Make sure you have the following:

RSAAuthentication yes
PubkeyAuthentication yes

Save when exiting.

6. Reload new configuration.
/etc/init.d/ssh reload (or)
service ssh reload
7. Protect your private key file. Locally on your machine, assuming you moved the private key file to the folder ./Security/:
chmod 0600 ./Security/myname_rsa
8. Test your new setup. Login to your remote host from your machine:

where ./Security/KEYFILE is the location of your private key

ssh -i ./Security/myname_rsa root@ -p 22345

You should be granted access immediately without password requirements.

9. Disable authentication by password.
sudo nano /etc/ssh/sshd_config

Make sure you have the following:

ChallengeResponseAuthentication no 
PasswordAuthentication no
UsePAM no

Save when exiting.

10. Reload new configuration.
/etc/init.d/ssh reload (or)
service ssh reload
11. Test your new setup again. Login to your remote host from your machine:

where ./Security/KEYFILE is the location of your private key

ssh -i ./Security/myname_rsa root@ -p 22345

You should be granted access immediately without password requirements. Also test using the old method, which should prohibit access.

ssh root@ -p 22345

Should yield: Permission denied (publickey).
Server is now protected against brute-force attacks.

Finally, make sure you adjust your development tools so they too can gain access to your secured server.


Your choice of tools may vary but the process is very similar. The following are my most used tools and how to tweak them to allow SSH key entry to my secured server.

FileZilla – SFTP

To enable FileZilla to access the server under the new configuration do this:

  1. FileZilla > Preferences…
  2. Settings window opens. Select “Connection > SFTP” (left hand navigation).
  3. In the right pane, click on “Add keyfile…”. Navigate to your private keyfile and click on it to add.
  4. You may be asked by FileZilla to “Convert keyfile” to a supported FileZilla format. This is fine and just click “Yes”. Save the output file to the same location as your private key file.
  5. Click OK in the Settings window to save final changes.

SublimeText2 – IDE

To enable SublimeText2 to access the server under the new configuration do this.

In your solution’s sftp-settings.json configuration file, enable key file access by changing the default:

"ssh_key_file": "~/.ssh/id_rsa",

to point at your key:

"ssh_key_file": "~/Security/myname_rsa",

And that’s it. Happy development!

~ Ernest

Outsourcing software development: pros and cons

Outsourcing part of your software engineering is not for everyone. It requires a lot of micromanagement and a software engineering background to make sure that what you ask for is what you get.

What follows is my own experience over the last 10 years in many outsourcing contracts working across India, China and Eastern Europe outsources both independent and agencies.

Are you sure it’s for you?

Never “palm off” the job in the form of outsourcing. Otherwise you will be heading down a spiral. The important piece of outsourcing is both micromanaging and understanding what the fuck is getting delivered. This way you can either pull the plug on crappy code or influence the right sort of implementation.

If you outsource too early, or outsource the core IP, you lose the power to radically change the design of your product. An early design is constantly changing, especially if you are building something which has never been done before. You want the flexibility to change fast. You need to be in control and know what is going on with all the moving pieces. Read more on how bad outsourcing impacted Boeing’s Dreamliners (787s).

This leads me to some key points on what skills you should have if you are going to outsource. Mind you I said “you” because it cannot be someone else you palm it off to.

1. Have a strong background in software engineering.

Loose coupling, less code, Don’t Repeat Yourself (DRY), explicit is better than implicit, Test-Driven Development (TDD), Distributed Version Control Systems (DVCS): these are all important. Did you understand any of those? If not, then you are going to get a piece of crap code. Why does code quality matter? Because it determines the type of engineering culture you build out internally, future maintenance (this is where the hard costs nail you down) and local hiring – quite frankly, great engineers do not like working in a pile of mess.

If you do not know how to code, move on, or go and learn to code. Anyone with the right attitude and time can learn to code today; there are plenty of resources online for free. No excuses.

If the outsourcer delivers crap code, you tell them to fix it. If they continue to deliver crap code, you break the contract and provide constructive feedback to them.

Detail, detail, detail. “The devil is in the detail.” My previous biz partner stressed this to the point where it is now embedded into my psyche and into how I work.

If you are outsourcing, make sure that you, or the person working 1:1 with the outsourcer, are very detail oriented. This way errors are caught fast and stopped at the front line, and where appropriate you can move fast and fire the outsourcer.

2. People skills

If you have a background working with people (we all do, right) and managing those people (oh, here we go) then this part will also go smoother. You need to understand you are working with people who have their own lives, families, goals and ambitions, so don’t be an ass just because you outsourced a piece of work to a “cheaper” labor country.

If it helps, review (even if you have already read it) How to Win Friends and Influence People by Dale Carnegie. The 3 basic principles:

  • Don’t criticize, condemn, or complain.
  • Give honest and sincere appreciation.
  • Arouse in the other person an eager want.

Look, you are going to have to micromanage them. Yes micromanagement ain’t ideal for your immediate employees but for contractors it is a must. They are paid to do a certain job and usually move on. You need to receive quality (refer to point 1 on engineering) and also make sure commitments are completed on time and within budget. Hence the micromanagement.

I also like to emphasize building a good relationship so you can work with them again, obviously pending the results of your engagement. Results are all that matter at the end of the day. But never lose sight of maintaining that level of expected quality. If it drops, give them a chance to correct it by providing constructive feedback. If nothing changes again, then cut ties immediately.

Remember: fool me once, shame on you; fool me twice, shame on me.

Right so you have the necessary skills to get moving. Here is where the harder stuff begins.

The checklist!

1. Automate.

As much as you can. Outsourcing isn’t just relationship management. There are a number of balls in the air, from managing the relationship, to code review and feedback, to product questions that need to be answered and/or fleshed out.

Use DVCS (ref my previous blog post) with email alerts enabled for code check-ins, comments and issue tracking. Have everyone involved with the job on email alerts so you know when code is checked in or issues are logged. I like using Bitbucket for all of this.

I also recommend you put them on HipChat for private group chat and IM, and business and team collaboration. This way you maintain all communication in one place.

2. The standards list.

Send the contractor your “standards list” of what you expect out of the engagement. Use Google Apps to write one up and share it if you do not have one. Put a line in the sand. Set the bar on:

  • Expected quality – DRY baby!,
  • Naming conventions,
  • Daily status updates – email or via HipChat,
  • Use of standard industry engineering practices like TDD, else you will get code without unit tests!
  • How everyone can reach each other for questions on product spec or similar, i.e. Skype, email, cell #, HipChat etc. Include the timezones everyone is working in.

3. Requirements.

For fuck’s sake, man. More detail. Stipulate all API calls, use cases, designs, the standards mentioned above, etc. If you have an engineering background you will appreciate and say “fuck yeah” to what I just said.

No one likes to document things, but this small initial investment will prove its worth when the final product is delivered to spec. Do not leave anything open to misinterpretation.

  • Have a Balsamiq design illustrating all the screens you expect and how they should look.
  • Where applicable provide designs for every screen. Do not let the contractor try to work out for themselves what you want. Never ends well and you get billed for that time.
  • Technical detail around API calls (request & response) with examples, use cases, a high-level flow diagram, etc.

4. Understand it before you open your mouth.

If you are developing for a channel you have no experience in, e.g. Android, then spend time learning it to at least a “high level” understanding so you can speak the lingo and know when you are being lied to. If you can hold your own in the lingo you will get more respect, and the contractor will not be able to pull a “shifty” on you.

5. Hiring.

Never straightforward, and it always requires a ton of work. But this pays off when you have the right contractor on board working with you.

  • Spend time writing up a detailed job spec, list it on oDesk/eLance and wait for the flood of offers. Immediately decline those that do not meet all of your criteria.
  • Set up a spreadsheet of everyone who applied to keep track of who you shortlist, their contact details, your last communication with them, etc. From the 100 applicants, narrow it down to a top 20.
  • Interview the top 20 via Skype video (yes, you need to see them) and listen for something that differentiates one from the rest. For me it was being asked questions I did not have an immediate answer to. Smart, switched-on engineers are like that, and that is how you know you have a winner.

Remember that at every point in the interview/communication you need to be prepared with a series of questions so you can use those as a base for quality and comparison.

Tip: when you do engage the outsourcer, make sure you keep working via oDesk or a similar tool. As much as you may be conned into believing that working outside oDesk is worth a 10% discount, it isn’t. oDesk provides great tools to track your contractor’s time (with videos), and in the end you get to leave feedback on them. Bad business means bad comments means no future business, so it is in everyone’s favor to be on best terms and get the job done right.

6. Have fun!

Not a long-term strategy

Outsourcing is great when you first kick off a startup and need to fill skill or time-constraint gaps, like kicking off a new channel which will interface with your in-house platform (your IP, which you built and are evolving) or design work. But that is where it stops.

Remember that outsourcing is work for hire. Your own company or startup is a labor of love shared only by you and those who live and breathe it each day in the office. So if you expected the outsourcer to care and be on the ball with something they built, you most likely skipped the crucial part: the part where I told you to own the whole process and be laser-focused on the work being outsourced. You fucked up. You’re at fault, not them.

Never outsource your core business. Only channels: those that are not what I call IP (intellectual property). Your IP always stays in-house, managed by you and your cofounder, and ultimately a kickass in-house team. For example, a business that’s attractive to investors typically has some sort of IP that’s hard for competitors to clone. That thing that makes it unique. It could be a unique algorithm or even data. You’d never outsource that. Stuff that can be outsourced might be a channel, e.g. a mobile app, as long as the IP (say that algorithm) stays in the API your local team manages.

Final note

You are not looking for a “sweat shop”. Find rock stars: contractors with a history of delivering quality code on time while communicating effectively. Communication decides whether you get an apple or an orange when all you wanted was an apple.

If you have any stories (good or bad), please share them with me below in the comments.

Happy outsourcing!
~ Ernest

PHP Coding Horrors and Excuses for Poor Decisions

Having coded in PHP for 7 years, I feel I can give balanced feedback on it. Today I mainly focus on Python and .NET because these languages have stood the test of time and allow me to attract great talent. I find it amusing that engineering leaders in established companies still make the backward decision to use PHP to power their business/core sites. Not to mention software engineering newbies falling prey to it as their first language for experiencing software development and putting theory into practice. So let’s explore this in more detail.

A quick story

A few years back, while attending a Python class, a young chap put up his hand, introduced himself as a long-time PHP developer and asked the lecturer: “What is the difference between Python’s dictionaries and lists and PHP’s arrays?” Bang. This is exactly why I do not want newbies to go down that route. Data structures are fundamental to any software design. PHP will NOT force you to think about data structures when coding; instead it just sticks a boot in your face and says walk.
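To make the distinction concrete, here is a minimal Python sketch (the variable names are just illustrative) of the two data structures that question conflated:

```python
# Python keeps ordered sequences and key-value mappings as distinct types;
# PHP's single "array" type plays both roles at once.
langs = ["PHP", "Python", "Ruby"]   # list: ordered, integer-indexed
ports = {"http": 80, "https": 443}  # dict: a hash map with keyed lookup

langs.append("Go")                  # grow the sequence at the end
assert langs[3] == "Go"             # access by position
assert ports["https"] == 443        # access by key

# In PHP both would be written array(...), so nothing in the code tells
# you which structure (or which performance profile) you actually have.
```

Being forced to choose between the two is exactly the data-structure thinking the story above is about.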

As a leader

As a smart, fast-paced technology leader, you should NOT be suggesting or advising PHP as the company’s “language of choice”. If a company runs PHP it is typically for its blog (yes, WordPress rocks), for legacy reasons (we all learn, right?) or a variant thereof. PHP is not even a great presentation language (what it was so famous for years ago), lacking good support for a real templating engine. Going with a LAMP stack, as in a Linux stack, is not about moving to PHP. As a matter of fact, “LAMP stack” is an old, beaten, used and abused term that means little today given the range of open source stacks that run on the Linux OS.

Let’s first look at what makes a good language. And if you are a leader looking at starting with or moving to a new language, this post should be enough to tell you what to avoid. Learn from others’ mistakes so you don’t have to make them yourself.

What makes a good language

  • Predictable
  • Consistent
  • Concise
  • Reliable
  • Debuggable

Check out the philosophies behind Python in Zen of Python on what a good language encourages.

PHP fails miserably here.

  • PHP is full of surprises: mysql_real_escape_string, E_ALL
  • PHP is inconsistent: strpos, str_rot13
  • PHP requires boilerplate: error-checking around C API calls, ===
  • PHP is flaky: ==, foreach ($foo as &$bar)
  • PHP is opaque: no stack traces by default or for fatals, complex error reporting.
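As a point of contrast for the flakiness of `==`, here is a short Python sketch showing equality without silent type coercion (the PHP behavior noted in the comments refers to PHP 5’s loose comparison):

```python
# Python compares across types without coercion, so one operator is enough;
# there is no need for a separate === "really equal" operator.
assert ("1" == 1) is False   # a string never equals an int
assert (0 == "a") is False   # PHP 5's loose == treated these as equal
assert 0 == 0.0              # numeric types still compare by value
assert "1" != 1              # != is simply the negation, no surprises
```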

PHP is NOT an enterprise language

An enterprise language is one that has good corporate support. Best example is Microsoft and their .NET platform.

Look at the support behind the PHP language. No corporation supports PHP’s growth and maturity the way Sun and Google do for Java, Google (where Guido van Rossum worked) does for Python (inc. the Django framework), or 37signals does for Ruby (inc. RoR), etc.

PHP is not supported by Yahoo. They failed to launch a version with Unicode support in the hyped-up PHP 6, and the father of PHP, Rasmus Lerdorf, is no longer based at Yahoo. Nor is PHP supported by Facebook. Facebook has been trying hard to move away from its aged roots and now compiles PHP into C++ via HipHop – more on that below.

The mess that is PHP

There are plenty of websites covering the mess that is PHP. Just go and read them if you are still doubtful.

Some of those nasty PHP horrors

  • Unsatisfactory and inconsistent official documentation.
  • PHP is exceptionally slow unless you install a bytecode cache such as APC or eAccelerator, or use FastCGI. Otherwise, it compiles the script on each request. It’s the reason Facebook invented HipHop (PHP compiler) to increase speed by around 80% and offer a just-in-time (JIT) compilation engine.
  • Unicode: Support for international characters (mbstring and iconv modules) is a hackish add-on and may or may not be installed. An afterthought.
  • Arrays and hashes treated as the same type. Ref my short story above.
  • No closures or first-class functions until PHP 5.3. No functional constructs such as collect, find, each, grep, inject. No macros (but complaining about that is like the starving demanding caviar). Iterators are present but inconsistently used. No decorators, generators or list comprehensions.
  • The fact that == doesn’t always work as you’d expect, so they invented a triple-equals === operator that tests for true equality.
  • include() can generate circular references and yield many unwanted, hard-to-debug problems. Not to mention how it is abused to execute whatever code gets included.
  • Designed to be run in the context of Apache. Any back-end scripts have to be written in a different language. Long-running background processes in PHP have to override the global php.ini.
  • PHP lacks standards and conventions.
  • There’s no standard for processing background tasks, such as Python’s Celery.
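For readers who have not seen them, here is a small Python sketch of the constructs the list above says early PHP lacked: first-class functions, closures and list comprehensions (all names are illustrative):

```python
def make_adder(n):
    # Returns a closure: the inner function captures n from this scope.
    return lambda x: x + n

add5 = make_adder(5)                 # functions are first-class values
assert add5(10) == 15

nums = [1, 2, 3, 4, 5]
squares = [x * x for x in nums if x % 2]   # comprehension with a filter
assert squares == [1, 9, 25]

doubled = list(map(add5, nums))      # pass a function to another function
assert doubled == [6, 7, 8, 9, 10]
```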

PHP presents 4 challenges for Facebook.

  • High CPU utilization.
  • High memory usage.
  • Difficult to use PHP logic in other systems.
  • Extensions are hard to write for most PHP developers.

Don’t use Facebook as an excuse to have PHP as your core language.

Excuses for poor decision to use PHP

“But Facebook is all PHP.”

Boo hoo. Is that what your decision was based on? Seriously? It is well documented that Facebook uses PHP for legacy reasons. It is what Mark Zuckerberg used in his dorm nearly a decade ago, and somehow it stuck around. Later, a top Facebook engineer named Haiping Zhao released HipHop, literally reimplementing the PHP language to avoid its worst attributes. Haiping has named four failed attempts since 2007 alone to move away: to Python (twice), to Java, and to C++. The reason these did not work is incumbent inertia (it’s what’s there).

So you see it is not the same PHP you are coding in but a far superior subset of it customized for Facebook process & development efforts. PHP at Facebook was a mistake that had been corrected to some degree. Today the preferred strategy at Facebook is to write new components in a de-coupled manner using a better language of choice (C++, python, Erlang, Java, etc); this is easily facilitated by Facebook’s early development of thrift, an efficient multi-language RPC framework.

“But Yahoo is all PHP.”

Seriously? Shall we even go into this? A sinking Titanic that started its life as a manually maintained directory site. Today’s online apps are more advanced and demand high concurrency and a dynamic nature – something more advanced languages are capable of delivering.

“But Zynga (a large gaming company) uses PHP.”

At the time Zynga started developing for the platform, there was no other official Facebook SDK available except for the PHP one. Naturally Zynga started its life on Facebook. The rest is history.

Looking for a better language? Guess! ~ Yes I drew that by hand 🙂 Hope you like it!

Technology breeds culture

Bring a bunch of core PHP developers (those who only know this language) on board and you get what you pay for: someone who can hack a script together without really understanding the fundamentals of software design and engineering.

Think about this. Your most valued assets are your staff, the people in your company. And your staff will naturally come from companies, backgrounds and experiences that align with the technology decisions you make.

How about rewriting your code base in another language?

There is also a lot of industry precedent (the Netscape case, or “Startup Suicide”) indicating that rewriting an entire codebase in another language is usually one of the worst things you can do. Either don’t make the mistake of going down the PHP route in today’s era, or start introducing a new language into the stack for new projects. Having a hybrid setup is OK; it actually allows you to iterate fast, gives your engineering crew something new to play with, and should you ever need to switch stacks you are already halfway there. Don’t make the same mistakes Facebook did.

The only bit I like about PHP is its “save file, refresh page and there are your changes” cycle. The language is “easy to use”, yes. It’s hard to figure out what the fuck it’s doing, though.

Happy coding!

~ Ernest

Migrating from PC to a Mac

This post is inspired by my wife, Urszula, whose Windows PC I nearly threw out the window... if it wasn’t for the data on it. But I did replace it with a MacBook Air! Win!

The new Mac

Urszula’s Windows PC (pictured above) was super loud, humming and buzzing even when idle. It was also slow (time has shown its face), felt cheap (plastic build), looked ugly (the case had line cracks) and had been blue-screening every 2nd day. What a mess, lol... This is what Urszula is now sporting: a sexy, fast, compact MacBook Air. I’m not knocking Windows; I know there are plenty of better and cheaper Windows laptops than a MacBook. I just found it hard to find one at the time.

Slick and sexy MacBook Air

It just flies! SSD drive, 8 GB of RAM and a Core i7 CPU.

I take full responsibility for the PC. People form habits, and it takes a lot of work to change. But it is possible. I made my change to a MacBook Pro a year ago after getting fed up with my “fast” Windows PC. Before the Mac I even switched the PC to Kubuntu which, to be frank, ended up a failure: too unstable for a development machine. Then I took a deep dive and got a Mac. It was well worth the initial pain of learning new ways of doing things.

Without further ado, here are the reasons you should switch to a Mac, and some tips and tricks once you do. Enjoy!

The reasons

Reasons TO switch to a Mac – the pros

  • Macs crash less than PCs – no more blue screen of death. So you get more done and are less frustrated, something power users notice most. The Unix kernel of the Mac boots up faster and runs more smoothly overall, especially when running several tasks at once, and one crashed app won’t take down the whole OS. It’s a pleasure to be working on a Mac while being more productive.
  • You spend hours a day staring at a screen. A Mac is not only pleasing to the eye in its design (shell) but all the applications are smooth, consistent and clean. macOS is the world’s most powerful and attractive operating system (eye candy), dictating how using the computer “feels”. Ref Kubuntu. Yes, I tried Ubuntu, Kubuntu and all flavors of Windows. Mac wins hands down.
  • Stops new users from getting into a bad habit using the worst internet browser out there – Internet Explorer. Your choices are everything but Microsoft’s Internet Explorer. I prefer to use the fast and light Chrome from Google.
  • Always on. Macs are fast to go to sleep and especially fast to wake up. Just close the lid when done and open it when you need it.
  • Easy installation of software. On my PC I had to find the .exe installer and run it, with a million other levers to pull to make it work. On my Mac I double-click a package file, a window pops open with the app icon, and I drag it into the Applications folder. Done.
  • No need to mess around the internet looking for free apps and hoping they are not virus- or trojan-infected. Apple’s Mac App Store is my one-stop shop for finding, researching, installing and upgrading apps.
  • Speaking of security: the Mac comes with a built-in firewall and a number of safeguards. Like Linux, its security benefits from being based on a Unix kernel. The list of viruses designed to wreak havoc on the PC dramatically outnumbers the Mac’s.
  • Windows don’t do Macs—but Macs do Windows. You can read NTFS (Windows) drives using NTFS for Mac OS X and even run Windows in a virtual environment on your mac without rebooting via VMWare Fusion.
  • No more messy wireless settings to get online like on a PC. On a Mac, select an available wireless network, enter your password (if any) and you’re online. Simple.
  • The OS mirrors that of the iPad and iPhone so user experience is consistent. Furthermore iCloud allows you to share content between all your Apple devices including SMS messaging. And AirDrop makes sharing files between Macs super easy without a network.


Reasons to NOT switch to a Mac – the cons

  • Learning curve. Oh no, you have to learn how to do things differently. OK, it’s a bit of a pain in the bum at first, especially that Command button, but after you get familiar you wonder why you didn’t switch sooner.

MacBook Air vs Pro vs PC – the Pro is on the right and the PC on the left. Notice the difference in thickness?


So you got your Mac and want to get set up. Here are a few tips to make the process fast and pain-free.

Recommended software


  • NTFS for Mac OS X (paid) to read Windows-formatted drives (think portable drives).
  • VMWare Fusion (paid) to run Windows programs on your Mac without rebooting. It even integrates into Mac OS X should you want a smoother transition or miss that old OS.
  • Chrome Internet Browser (free) – the best browser out there, hands down
  • Microsoft Office (paid)
  • Dropbox (freemium) – store your stuff in the cloud. You start off with a few gigs of free space. The client runs quietly, syncing your files. It is the most reliable cloud storage sync app on the market. It’s how software should be.
  • Evernote (freemium) – your notes in the cloud. Evernote is a freemium app designed for note taking, organizing, and archiving across all devices.
  • f.lux (free) – it makes the color of your computer’s display adapt to the time of day, warm at night and like sunlight during the day. This helps ease eye strain and sleepless nights from the blue back light of a LCD.
  • CleanMyMac (paid) – Clean, optimize, and maintain your Mac with the all-new CleanMyMac 3. It scans every inch of your system, removes gigabytes of junk in just two clicks, and monitors the health of your Mac.
  • Little Snitch (freemium) – a firewall protects your computer against unwanted guests from the Internet. But who protects your private data from being sent out? Little Snitch does. It protects your privacy.
  • Slack (freemium) – a messaging app for teams of one or more. Most people use it in teams, but you can just as easily use it at home, especially if you have IoT devices sending messages directly to Slack. It’s brilliant.


  • Homebrew (free) – the package manager for OS X. Homebrew installs the stuff you need that Apple didn’t. Think apt-get for your Mac.
  • Sublime Text (freemium) – your Notepad for Mac. It’s also a sophisticated text editor for code, markup and prose. You’ll love the slick user interface, extraordinary features and amazing performance.
  • Balsamiq (paid) – a wireframing and mockup tool with a high focus on usability. Quickly come up with mockups and easily share them with your clients.
  • PyCharm (paid) – the most intelligent Python IDE, hands down. Got something better? Let me know.

Also check out this curated list of awesome applications, software, tools and shiny things for OS X:

Most apps which you enjoyed on your PC will have equivalent Mac versions. If not just use VMWare Fusion to run Windows on your Mac and the PC apps inside that. Contact me if you need help getting this up and running.

Performance tuning

If you do not want your Mac’s performance to drop over time, I recommend turning off non-essential Spotlight search results. Spotlight is the standard search on the Mac, located in the top right-hand corner. It indexes EVERYTHING you do and surf on the Mac. Over time this index grows too big, and adding to or searching it slows the Mac down.

Under System Preferences > Spotlight > Search Results
disable the non-essential categories like “Messages & Chats”, “Webpages” & “Developer”.

Useful knowledge for expats of Windows

GUI Shortcuts

The following keys are nowhere to be found on Mac laptops, only on full keyboards. Use these combinations to achieve the same effect:

Page Up/Down: fn + up/down arrow
Backspace: fn + delete
Lock your Mac: shift + ctrl + eject
Force app to close: command + option + esc
Exit full screen view: command + control + option + f

Terminal commands

The Mac has a terminal like the Linux flavors do. This is great for running some powerful functions that can be slower through the GUI.

Find in Trash: find /Users/ernest/.Trash -name '*.jpg'
Force Safari to always open new tabs instead of a new window: defaults write com.apple.Safari TargetedClicksCreateTabs -bool true

That should be enough to get you rolling in full swing. As always, feel free to contact me if you need help or further information. If you found this post useful, please share it around on Twitter or Hacker News.

~ Ernest

How to Change DHCP to a Static IP Address in Ubuntu Server

If you work in a VM environment you have no doubt come across the issue of a changing IP address. It really messes up your workflow, since all the saved bookmarks, hosts files and other associations now need to be changed to the new IP address. The solution is to disable DHCP and assign a static IP address, but in a way that still gives you access to the outside world (your host machine) and the world outside that (the internet) so you can run updates and pull code from your repositories.

Here’s a bunch of steps, copy-and-paste style, which you can run in an Ubuntu shell to make this happen.

Do this in your Ubuntu Shell

1. Get your current address, netmask and broadcast.

ifconfig

Under “eth0” (see below), write the numbers down. We will use them to lock in your static IP address.

eth0 Link encap:Ethernet HWaddr 00:0c:29:fe:45:3a
 inet addr: Bcast: Mask:

2. Get your gateway and network.

route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
 UG 100 0 0 eth0
 U 0 0 0 eth0

UG = the route is up (U) and goes via a gateway (G). The other entry’s IP address is your network IP.

3. Change your IP address.

3.1 Open interfaces in nano

sudo nano /etc/network/interfaces

3.2 You will see something like this:

# The primary network interface
   auto eth0
   iface eth0 inet dhcp 

3.3 Replace it with:

auto eth0
   iface eth0 inet static

Note: Fill in the stanza with address, netmask, network, broadcast and gateway lines using the numbers you wrote down in steps 1 and 2; mine are based on my VM and only serve here as an illustration.

4. Make sure your name server ip address is your gateway ip address.

sudo nano /etc/resolv.conf

5. Remove the dhcp client so it can no longer assign dynamic ip addresses.

sudo apt-get remove dhcp-client

(On newer Ubuntu releases the package is named isc-dhcp-client.)

6. Restart networking components.

sudo /etc/init.d/networking restart

7. Confirm you can access the outside world.

ping google.com
Note: ctrl+c to break out of the ping.

8. Enjoy!

And that is it: simple, but an effective boost to productivity.

~ Ernest