Virtual environments can be kind of cryptic to people who haven't worked with Python for a while. I'd say that even for people who do work with Python, it can take a long time for it to "click". The short version is: you want one virtual environment for every Python project you work on, which, if you work on a lot of smaller projects, can get annoying.
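To make that concrete, here's a minimal sketch using the stdlib venv module - the project path and the ".venv" directory name are just conventions I've assumed:

```python
# A minimal sketch: one virtual environment per project, created with the
# stdlib venv module. The project path and ".venv" name are assumptions.
import venv
from pathlib import Path

project_dir = Path("~/projects/my-project").expanduser()  # hypothetical project
env_dir = project_dir / ".venv"

venv.create(env_dir, with_pip=True)  # same as: python -m venv .venv
print(f"activate with: source {env_dir}/bin/activate")
```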
Here's a pattern I've seen in multiple organizations: if someone is stuck with a problem, they guess who might know the solution and ask them privately, either in person or over chat (e.g. Slack). I always preferred to be asked over chat rather than in person, because it means I can postpone reading it for 5 minutes if I'm in the middle of something, and now with remote working being pretty much mandatory, the in-person option isn't even there any more.
Whether you're building on-premises software or just want to use packages as your atomic deployment mechanism of choice in a traditional bare-metal/VM infrastructure, deb/rpm packages are a nice thing to provide.
Multi-stage builds can help reduce your Docker image sizes in production. This has several benefits: development dependencies can expose extra security holes in your system (I've yet to see this happen, but why not be cautious if it's easy to be so?), but mostly, a smaller image is faster for others to docker pull.
Sometimes, you want to run a subprocess with Python and stream/print its output live to the calling process' terminal, and at the same time save the output to a variable. Here's how:
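Something along these lines does the trick - a minimal sketch (the ping command is just a stand-in for your real subprocess):

```python
import subprocess

def run_and_capture(cmd):
    # Merge stderr into stdout so there's only one stream to read.
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    captured = []
    for line in proc.stdout:   # yields lines as the subprocess produces them
        print(line, end="")    # stream live to our own terminal...
        captured.append(line)  # ...while keeping a copy
    proc.wait()
    return "".join(captured)

# "ping" is just a stand-in for whatever you actually want to run.
output = run_and_capture(["ping", "-c", "3", "localhost"])
```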
Of all the languages I've worked with, Python is one of the most annoying when it comes to managing dependencies - only Go annoys me more. The industry standard is to keep a strict list of your dependencies (and their dependencies) in a requirements.txt file. Handily, this can be auto-generated with pip freeze > requirements.txt.
Three years ago I wrote about Russell, a static site/blog generator of mine. Since then, I've done a major rewrite of the project to make it easier to extend and configure.
Setting up logging in a sane way in Django has been surprisingly difficult, due to some confusing setting names and the annoying way Django's default logging setup works. Here I'll go through some simple steps you can take to gain full control of your logging setup, without too many changes to a standard Django setup.
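To give a taste of where this ends up, here's a minimal sketch - the formatter, handler and logger names are placeholders, not prescriptions:

```python
# settings.py - take over logging configuration from Django entirely.
import logging.config

LOGGING_CONFIG = None  # stop Django from configuring logging itself

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "console": {"format": "%(asctime)s %(levelname)s %(name)s %(message)s"},
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "console"},
    },
    "loggers": {
        "": {"handlers": ["console"], "level": "INFO"},  # the root logger
    },
})
```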
If you're like me, you prefer setting up different SSH keys for personal and professional use. Maybe you even work for multiple organisations at the same time and don't want to risk one compromised private key having a widespread effect.
Whether you use Puppet Enterprise or r10k, using a "control repo" with a branch for every environment is the way you want to set up Puppet these days. Finding a way to make this work well with Vagrant for local development was surprisingly difficult - most guides out there focus on a very simple Puppet setup with no modules, or assume that Puppet is installed on the host operating system. I wanted to write a bit about the things I discovered while experimenting to get a proper setup up and running.
In many cases, you will want to manage your own systemd service definitions. Here's how.
Writing Salt state files can be somewhat deceptive. They have a concept of includes, which allows you to split up state files and define dependencies between them, giving you reduced duplication, a cleaner top.sls and a way to run state files individually without dropping all your requirements. However, unlike imports in Python and other programming languages, includes don't need to be defined at the top of the file (it's not even considered best practice). Realizing this opens up some opportunities.
Sometimes, you want to wait for a service to be running before running other states. Usually this can be done with a service.running state, which is then required by other states. For example, a mysql_database.present state can require the mysql service state, and it won't be run before the mysql service has been started.
Targeting grains is probably the most widespread bad practice in Salt. It helps reduce verbosity and duplication in your top files, but it also opens up some serious security holes in the event that a minion is compromised.
In this post, I'll show you how to effectively override Flask's url_for function in order to add a timestamp to static asset URLs, as well as how to set up Nginx to serve cache-busted URLs.
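The core of the trick looks roughly like this - a minimal sketch of the url_for override only (the Nginx side isn't shown, and dated_url_for is just an illustrative name):

```python
import os
from flask import Flask, url_for

app = Flask(__name__)

def dated_url_for(endpoint, **values):
    # For static files, append the file's mtime as a cache-busting query
    # parameter, e.g. /static/app.css?v=1577836800.
    if endpoint == "static" and "filename" in values:
        file_path = os.path.join(app.static_folder, values["filename"])
        values["v"] = int(os.stat(file_path).st_mtime)
    return url_for(endpoint, **values)

@app.context_processor
def override_url_for():
    # Templates now get the timestamped version when they call url_for().
    return {"url_for": dated_url_for}
```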
/path/to/virtualenv/bin/my-script to a directory in your $PATH, such as
SaltStack is an awesome provisioning tool I've been implementing over the past few months. I'd like to share a few pointers for other people working with it for the first time.
I work with public GitHub repositories a lot, and get super annoyed because I want to push with my SSH key (because I'd rather put in my key's passphrase than my GitHub username/password), but I want to pull with HTTPS (because then I don't have to put in a username or password). Normally, the way you do this is:
Working with increasingly complex SASS mixins and functions recently, I wanted to set up some sort of test suite to check the CSS output of various files. Rather than bother with some silly NPM package/Ruby gem, I figured I might as well just use some basic shell commands that can be placed in a Makefile (which I already had anyway).
There are a lot of articles revolving around Laravel 4 specifically that try to explain how to abstract logic away from your controller, but most of them either kinda miss the point or overcomplicate things (by using facades, repositories, interfaces...).
There's been some discussion around Laravel and SemVer (semantic versioning) recently, which I appreciate. SemVer is in my eyes a very important guideline for frameworks and widely adopted libraries.
Testing mails in Laravel 4 is a bit of a weak spot. You can say Mail::shouldReceive('send')->once()... but specifying everything the method should receive in terms of arguments, as well as asserting that the closure sets the recipient and subject correctly, is tedious at best. This SO answer shows an example of how to unit test a mail being sent.
Over the weekend I had a fun little project - writing a tiny static HTML blog generator.
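The essence of such a generator fits in a few lines. Here's a toy sketch of the general idea - not the actual project; the file layout and template are made up:

```python
# A toy static blog generator: wrap every HTML fragment in posts/ in a
# trivial page template and write the result to public/.
from pathlib import Path

TEMPLATE = "<html><body><h1>{title}</h1>{body}</body></html>"  # made-up template

def build(src="posts", dest="public"):
    out = Path(dest)
    out.mkdir(exist_ok=True)
    for post in Path(src).glob("*.html"):
        page = TEMPLATE.format(
            title=post.stem.replace("-", " "),  # "my-post.html" -> "my post"
            body=post.read_text(),
        )
        (out / post.name).write_text(page)

if __name__ == "__main__":
    build()
```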
Mocking classes and defining what methods they should receive is easy.
I love the Laravel 4 FormBuilder (accessible through the Form:: facade) - automatic re-populating of input from the session and from a model is awesome. In upcoming versions you will even be able to use accessors to populate form fields from the model, even if they're not actually fields in the database.
Repositories have their place in applications that deal with fetching data, whether from a database or from an external source. In the Laravel world the repository pattern has been praised a bit too much for its advantages in terms of testability and architecture. I've written about how you can achieve the same level of testability without repositories, but sometimes repositories really are warranted.
There are a lot of people advocating the repository pattern for testability in your Laravel 4 projects. Fact is, it doesn't make your code that much more testable, and you can easily achieve the same level of testability by using your models as you would a repository.
I had a problem recently where I wanted to use PHPUnit to test what was being passed to a function. We're talking about a several-hundred-line string with a current timestamp in it, so simply using $mock->shouldReceive('method')->with('parameter string') wouldn't work.
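That post is about PHPUnit/Mockery, but the underlying trick - matching on a predicate instead of an exact value - translates directly to other stacks. Purely for illustration, here's the same idea with Python's unittest.mock:

```python
from unittest.mock import MagicMock

class Satisfies:
    """Matches any argument for which the given predicate returns True."""
    def __init__(self, predicate):
        self.predicate = predicate
    def __eq__(self, other):
        return self.predicate(other)

mock = MagicMock()
mock.method("a huge generated string containing 2020-01-01T00:00:00 somewhere")

# Passes regardless of the embedded timestamp, because equality is
# delegated to the predicate instead of an exact string comparison.
mock.method.assert_called_once_with(Satisfies(lambda s: "huge generated" in s))
```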
Documentation on optimization and performance is somewhat lacking for Laravel 4 at the moment. In this post I'll give some quick pointers as to how Laravel 4 works and how you can improve its performance.
I've had this problem with my old Lenovo ThinkPad with every Linux installation. When I installed CrunchBang (a great distro, by the way) I had it again and had forgotten how to fix it. Amazingly, by googling around I found my own old blog with a post on how to fix it, from all the way back in Ubuntu 9.10. I decided to rewrite it a bit and host it here.