Phoenix – First Impressions

For the past decade, my primary tool for building web apps has been Ruby on Rails. Using such a toolset helps me rapidly build apps with a predictable structure and the means to manage the database schema.

Of course, such tools come with trade-offs, but discussing those is beyond the scope of this post. Instead, I hope to focus on some of the positives they provide.

In my opinion, web frameworks show their worth in the early stages of app development and in the mid-term when there is some developer churn. They aid both Current Developer (present-day maintainers) and Future Developer (eventual project maintainers) by providing some safe assumptions about where things are and about the general app flow.

Enter Phoenix…

Recently, I began exploring Elixir and shared my first impressions [1]. Developing web apps in a manner similar to Rails naturally points to the Phoenix Framework, since the core team is influenced by it [2].

Rather than setting expectations for specific things, I jumped in with one thing in mind: hoping to find that I can remain as productive with Phoenix as I am with Rails.

How Did I Learn?

To get familiar with the framework, I followed the Phoenix guides provided on the main site. They hit all of the highlights of typical project management, including resource generation, database management, and app testing.

Here are my takeaways…

Thumbs Up!

A few of my favorite things…

Schema in the Model

In Active Record with Rails, details about the schema are “hidden knowledge” kept away from the data model; the information lives in a separate database schema file. In Ecto, defining the schema inside the model exposes valuable information for Future Developer, providing the underlying makeup of the data model inline. Awesome!
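As a minimal sketch of the idea (the MyApp.User module and its fields are made up for illustration), an Ecto schema lives right in the model:

```elixir
defmodule MyApp.User do
  use Ecto.Schema

  # The schema sits in the model itself, so Future Developer can see
  # the underlying makeup of the data without opening a schema file.
  schema "users" do
    field :name,  :string
    field :email, :string
    field :age,   :integer

    timestamps()
  end
end
```

Compare this with a Rails model, where the same information only exists in db/schema.rb.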

Data Repo

Rather than executing database activity directly through the data model, Ecto uses separate Repo modules to manage it. This is nice since it keeps the models isolated from database connectivity concerns and leaves them focused strictly on data modeling.
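A sketch of what that separation looks like in practice, assuming the MyApp.Repo and MyApp.User modules a generated project would provide:

```elixir
# The model never talks to the database; the Repo does.
user = %MyApp.User{name: "Jane"}

{:ok, saved} = MyApp.Repo.insert(user)            # persistence goes through the Repo
fetched      = MyApp.Repo.get(MyApp.User, saved.id)
all_users    = MyApp.Repo.all(MyApp.User)
```

The model module stays a plain description of the data, with no query or connection logic of its own.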

Channels

Baked-in support for channels allows any type of client to subscribe to Phoenix apps for “real time” data. With the BEAM managing large numbers of concurrent connections, the confidence level for a stable solution should be high.
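As a rough sketch (the module name and topic strings are made up), a channel that lets clients join a lobby and rebroadcast messages might look like:

```elixir
defmodule MyApp.RoomChannel do
  use Phoenix.Channel

  # Clients subscribe to a topic; join/3 decides whether to let them in.
  def join("rooms:lobby", _message, socket) do
    {:ok, socket}
  end

  # Incoming events are pushed back out to every subscriber on the topic.
  def handle_in("new_msg", %{"body" => body}, socket) do
    broadcast!(socket, "new_msg", %{body: body})
    {:noreply, socket}
  end
end
```

Each connection is a lightweight BEAM process, which is where the confidence in handling many concurrent clients comes from.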

Plug

The Plug spec provides a good way to isolate behavior for reuse elsewhere that could be painless to test. Win!

Thumbs Down

Surprisingly, I encountered one thing that bothered me, but it is a big one.

Node.js Dependency

I was very disappointed to learn that I need Node.js for the default static asset manager, Brunch. To be fair, this dependency is optional, and any build tool can be used. However, from what I have seen, a framework’s default tends to be what gets used in the wild.

Wrapping Up

Do I feel that I can be productive with Phoenix? Yes! I believe that after getting more comfortable with the tooling, and Elixir in general, the development pace could be maintained.

Now I am eager to put it to work to see it in action!


  1. Elixir – First Impressions
  2. Phoenix is not Rails

Elixir – First Impressions

I have been an Erlang/OTP enthusiast for a few years. Considering that most of my day job is spent writing web apps, the platform’s focus on providing highly available, distributed, fault-tolerant systems using immutable data is very appealing.

But what about Elixir?

Considering that my main language is Ruby, one would think that I am drawn to the familiar syntax. However, I felt comfortable enough in Erlang/OTP that the Elixir syntax felt unnecessary.

Why Now?

Curiosity finally got the better of me! Two things in particular convinced me to take a closer look:

  1. Tooling – What is working with Mix like? How easy is it to feel productive?
  2. To the BEAM! – Erlang/OTP adoption seems to be hindered by its syntax. If Elixir is more comfortable to a wider audience and still compiles for use on the BEAM, so be it!

How Did I Learn?

My introduction was by following the Elixir tutorial [1] provided on the main site. It is a well-written guide with many examples introducing newcomers to the syntax, pattern matching, inter-process communication, and the OTP libraries. It also uses Mix throughout, for running code and tests and for generating applications.

The tutorial covers many topics for newcomers, and I encourage you to check it out!

Here are some things that stood out to me, along with some things that I am less enthusiastic about…

Thumbs Up!

While this is not a complete list, these things stood out to me.

Document Testing

This is a powerful feature. It encourages quality examples in the documentation itself while also providing test scenarios, verifying that the examples are sane. I believe that this is a really good way to communicate how a function is expected to be used, leaving the edge cases for unit tests.
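A minimal sketch of the idea, with a made-up MyMath module: the iex> examples in the @doc string double as tests once a test file declares doctest MyMath.

```elixir
defmodule MyMath do
  @doc """
  Adds two numbers.

      iex> MyMath.add(1, 2)
      3

      iex> MyMath.add(-1, 1)
      0
  """
  def add(a, b), do: a + b
end

# In test/my_math_test.exs:
#
#   defmodule MyMathTest do
#     use ExUnit.Case
#     doctest MyMath   # runs the iex> examples above as tests
#   end
```

If an example in the documentation drifts out of date, the test suite fails, which is exactly what keeps the examples sane.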

Mix

The tooling is easy to use and involved in many regular tasks. It drives running tests, compiling code, loading apps into Interactive Elixir, managing app dependencies, and generating new apps.

Umbrella Projects

Speaking of app generation, umbrella projects are nice! They wrap multiple small, focused apps, simplifying the task of grouping them into one complete unit.
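As a sketch, an umbrella project generated with mix new my_umbrella --umbrella gets a top-level mix.exs pointing at an apps/ directory, and each focused app lives under it (the module name follows the generator’s convention):

```elixir
# mix.exs at the umbrella root
defmodule MyUmbrella.Mixfile do
  use Mix.Project

  def project do
    [apps_path: "apps",   # each small, focused app lives in apps/
     deps: []]
  end
end
```

Running mix test or mix compile from the root then walks every app under apps/, so the group behaves like one unit.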

Thumbs Down

I am not sure that I would list these if I had not already learned Erlang/OTP, but I did (with this [2]), so here we go!

Rebinding of Variables

Erlang/OTP does not permit reusing variable names*, yet Elixir does. To me, this discourages keeping immutable data in mind during development.

* Technically it is possible to reuse variables in an Erlang/OTP shell if you flush out the current process’ knowledge of them, but it is not possible in a running application [3].
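A small illustration of the difference; the Elixir side compiles and runs, while the equivalent Erlang raises a badmatch error:

```elixir
# Elixir happily rebinds:
x = 1
x = x + 1   # x is now 2; the data is still immutable, only the name moved

# To match against the current value instead of rebinding, pin it:
^x = 2      # succeeds, because x is 2
# ^x = 3    # would raise a MatchError

# In Erlang, reusing the name is an error:
#   X = 1,
#   X = 2.  %% ** exception error: no match of right hand side value 2
```

The pin operator gives back Erlang-style matching when you want it, but you have to remember to reach for it.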

Lowercase Variables and Functions

Erlang/OTP forces variables to be uppercase. When reading code, casing alone makes it very easy to distinguish variables that store values from the things that provide values (functions, records, etc.).

Elixir, however, chooses to make modules uppercase, allowing variables and functions to both be lowercase. This is familiar to Ruby developers, but the opposite of what Erlang/OTP developers are used to seeing.
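A small side-by-side of the conventions; in the Elixir sketch below (module and names made up), casing alone no longer separates value-holders from value-providers:

```elixir
defmodule Greeter do                        # modules: uppercase (CamelCase)
  def hello(name), do: "Hello, " <> name    # functions: lowercase
end

greeting = Greeter.hello("world")           # variables: also lowercase

# Compare Erlang, where Greeting (a variable) must be uppercase
# and greeter:hello/1 (module:function) must be lowercase.
```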

Wrapping Up

I walked away with a good impression of the tooling:

  • Elixir places a priority on documenting and testing code
  • The design of small and focused apps is encouraged
  • Mix makes it easy to bundle them up with umbrella projects

These things help developers build nicely designed Erlang/OTP apps.

Soon, I am going to check out the Phoenix Framework to explore its use developing web apps! [4]


  1. Elixir – Getting Started – Introduction
  2. Learn You Some Erlang for Great Good!
  3. Invariable Variables
  4. Phoenix – First Impressions



Converting PC-BSD Isotope to a Rolling Release

Lately I have grown curious about the differences between Linux and Unix. Some friends suggested that I consider something from the BSD family, and I decided to take a closer look at PC-BSD.

Currently I run the testing branch of Debian on my primary workstation, and I have grown accustomed to operating on a rolling release. After learning that PC-BSD recently introduced the rolling release feature [1], I naturally wanted to convert the PC-BSD installation to use it!

Starting Point

Here are some notes about the status of the system before the conversion:

Config File Updates

The first one was not really for the conversion and was instead for a recent system update. Following the advice from this post [4], I made this modification:

cd /etc
sudo cp freebsd-update.conf freebsd-update.conf.orig
sudo vim freebsd-update.conf

# Update IgnorePaths to include /boot/kernel/linker.hints
IgnorePaths /boot/kernel/linker.hints

The second one got the ball rolling for the conversion. Following the main update guide [1] I made this change to the Update Manager config:

cd /usr/local/share/pcbsd/pc-updatemanager/conf
sudo cp sysupdate.conf sysupdate.conf.orig
sudo vim sysupdate.conf

# Change the PATCHSET value to pcbsdtest
PATCHSET: pcbsdtest

Preliminary Conversion

Running the Update Manager provided two updates that attempted to convert the system to a rolling release. However, one of them failed due to an invalid hostname in a configuration file that it generated. I updated the file to use a valid mirror:

cd /usr/local/etc
sudo cp pkg.conf pkg.conf.orig
sudo vim pkg.conf

# Change the package site to a working mirror

With a valid host, I was able to proceed with the update:

sudo PERMISSIVE=yes pkg2ng

The final prep work was done using the pkg command. It started by updating the local catalog:

sudo pkg update

Thinking that I was starting the conversion, I ran the next command only to upgrade pkg itself:

sudo pkg upgrade

I ran into trouble after this. The problems that I had are described in this post [2] and this post [3]. The workaround was to remove these packages:

sudo pkg delete -f \
  a2ps-a4-4.13b_4 \
  linux-f10-libGLU-7.2

Finishing the Conversion

It’s the Final Countdown! This step forced all packages to be reinstalled or upgraded and downloaded a large number of files:

sudo pkg upgrade -f

The upgrade did not seem to reinstall any NVIDIA driver, so the last thing that I did was install the newest one:

sudo pkg install nvidia-driver-310.44_1


First Impressions

It took a few tries to get the system fully upgraded and usable. However, keep in mind that the rolling release is still in its infancy, so trouble comes with the territory.

So far, I am enjoying my PC-BSD experience and I am eager to learn more about BSD!


  1. PC-BSD Rolling Release Upgrade Available
  2. Forum: Update failed
  3. Forum: PC-BSD rolling release – nvidia-driver
  4. Forum: Latest FREEBSD security update will not install

Compact home firewall

In 2010, I began searching for a security solution for my employer. One of the options I found was Endian. After noticing that they have a free community edition, I decided to put a spare eMachine to work on my home network to try it out. Once I set it up and finished configuring it, I concluded that there was no reason to shut it down, and I have left it running ever since.

Why the change?

My wife and I recently discussed plans for the office space that the firewall resides in to double as a guest room. Part of the redesign includes moving all existing networking equipment, including the firewall, to some wall-mounted shelves.

The new room design called for a new firewall, and the replacement had some requirements:

  1. Compact: It needed to be small enough to fit on a wall-mounted shelf.
  2. Quiet: It needed to be quiet enough to run 24/7 without disturbing guests.
  3. Low Heat: It could not generate enough heat to change room temperature.

Pieces to the puzzle

After exploring a few options, I decided to use the following hardware:

Hardware Components

At the time of purchase, the total cost was about $280.

In action

After putting it all together and installing Endian Community, how did it measure up?

Compact

The case measures in at 7.3″ x 8.7″ x 2.8″. The footprint is small enough to sit beside the modem and router on a wall-mounted shelf.

Quiet

The solid state drive has no moving parts, so it does not create any noise. The case comes with an optional fan, but I have not yet needed it. The motherboard only needs the CPU heat sink to keep it cool, so the firewall is virtually silent.

Low Heat

Computers generate lots of heat when high-powered processors are doing many calculations. The Intel Atom N455 processor draws about 6.5 watts of power and the OCZ SSD pulls about 2 watts, roughly 8.5 watts combined, while the power supply provides a maximum of 60 watts. The low power usage keeps the heat noticeable only by touching the case, and the room temperature consistently matches the rest of the house.

Mission accomplished!

We now see that a mini-ITX motherboard using an Intel Atom processor along with an SSD provides a small, quiet, and cool solution. A mid-size tower was replaced by a compact and energy-efficient home firewall appliance!

A shot of the old firewall next to the new…

Before and After


Getting to know Ruby debugger

A key step to debugging any program is replicating the environment to ensure you can consistently produce the bug. In my early Ruby days, to inspect the environment, I used a primitive method: placing puts lines in my code to print values to the console (let’s call them “inspection puts”). It may have looked something like this:

class Buggy
  # assuming perform_operation and special_options exist...
  def buggy_method(param=nil, options={})
    puts "\n\n\nDEBUG"
    puts "param: #{param}"
    puts "options: #{options}"
    @instance_var = perform_operation(
      special_options(param, options)
    )
  end
end

The case for ruby-debug

This method very easily gives me the information I need, but it has some downsides:

  1. To inspect the return value of special_options I have to add another inspection puts.
  2. Every addition of a new puts requires that I restart the application to inspect the results.
  3. To inspect how special_options and perform_operation are handling the data, I have to add inspection puts inside of them.
  4. I must remember to remove all of the inspection puts before I push the code.

If only there was a better way to do this.

ruby-debug to the rescue! By putting a breakpoint in our code, we have the ability to inspect environment state, check the return value of any methods, and step through our code one line at a time. This is much more versatile than inspection puts because the full environment is available to us. The “I wonder what the value of this is” problem is gone since we can inspect whatever we want. The step functionality the debugger gives us is useful as well, allowing us to step inside of a called method while maintaining the interactive environment.

Setting up ruby-debug

To get set up with the debugger, we’ll need to install the gem:

# Using MRI-1.9.2
gem install ruby-debug19

# Using MRI-1.8.7 or Ruby Enterprise Edition
gem install ruby-debug

ruby-debug in action

Let’s update the example from above using a debugger breakpoint instead of inspection puts:

class Buggy
  # assuming perform_operation and special_options exist...
  def buggy_method(param=nil, options={})
    require 'ruby-debug'; debugger
    @instance_var = perform_operation(
      special_options(param, options)
    )
  end
end

The next time the method is called, the debugger will stop and give us an interactive shell. We can inspect the values of each variable with the eval command:

eval param
eval options

We can also see the return values of the invoked methods:

eval special_options(param, options)
eval perform_operation(special_options(param, options))

We can even use the step command to step inside special_options and perform_operation to see what they do.

Here are the various debugger commands that I most commonly use:

  • list, l – show the code for the current breakpoint
  • eval, e – evaluate expression and print the value
  • step, s – next line of code, moving within methods
  • continue, c – continue in the program until the program ends or reaches another breakpoint
  • quit, q – abort the program

Many more commands are available, which can be seen by entering help in the debugger.

Better debugging ftw!

With the ruby-debug gem, we have a better tool for diving into our code than inspection puts. Using a debugger breakpoint, we can interactively step through our code with less cleanup afterward.

Happy debugging!

