Here’s the hibernate route that we pasted into handler.ex.
def route(%Conv{ method: "GET", path: "/hibernate/" <> time } = conv) do
time |> String.to_integer |> :timer.sleep
%{ conv | status: 200, resp_body: "Awake!" }
endAnd here’s the kaboom route that we pasted into handler.ex.
def route(%Conv{ method: "GET", path: "/kaboom" } = conv) do
raise "Kaboom!"
endError When Running :observer.start
Depending on your version of Elixir, when running :observer.start you may get the error:
(UndefinedFunctionError) function :observer.start/0 is undefined
To fix this, edit mix.exs to add :observer, :wx, and :runtime_tools to the extra_applications list inside the application function, like so:
def application do
  [
    extra_applications: [:logger, :eex, :observer, :wx, :runtime_tools]
  ]
end

Then restart iex -S mix.
Two Ways to Spawn
To recap, the spawn/1 function takes a zero-arity anonymous function. For example:
spawn(fn() -> serve(client_socket) end)

There’s also a spawn/3 function that takes the module name, the function name (as an atom), and the list of arguments passed to the function. For example:
spawn(Servy.HttpServer, :start, [4000])

You may hear these three arguments referred to as MFA (for module, function, arguments).
In either case, spawn creates a process and immediately returns the PID of that process. The process that called spawn does not block; it continues execution. Meanwhile, the spawned process runs its function concurrently, in the background. When that function returns, the spawned process exits normally and the Erlang VM takes care of cleaning up its memory.
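If you want to see this non-blocking behavior for yourself, here's a quick experiment you can run in an iex session (the sleep is only there to make the timing obvious):

pid = spawn(fn ->
  :timer.sleep(1000)
  IO.puts "Spawned process done!"
end)

# This line runs immediately; it does not wait for the sleep above.
IO.puts "spawn returned #{inspect pid} right away"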
Functions Are Closures
In the video we spawned the serve function and passed it the client_socket, which serve uses to read the request and send back the response:
{:ok, client_socket} = :gen_tcp.accept(listen_socket)
spawn(fn -> serve(client_socket) end)

Notice that the client_socket variable is bound in the outer scope. Then the spawned anonymous function uses this same variable. This works because functions in Elixir act as closures. When a function is defined, it “closes around” the bindings of variables in the scope in which the function was defined.
For example, when our anonymous function is defined, it remembers the binding of client_socket. In this way, the contents of client_socket are passed from the process that called spawn to the spawned process itself. It’s important to note that data passed from one process to another is always deep copied since processes share no memory.
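Here's a tiny example you can try in iex to see a closure capturing a binding (the variable name is just for illustration):

greeting = "Hello from the parent process"

spawn(fn ->
  # The anonymous function closed around greeting, so the spawned
  # process gets its own copy of the value.
  IO.puts greeting
end)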
Inspecting the PID
In the video we used the inspect function to print the PID of the current process, like so:
IO.puts "#{inspect self()}: Working on it!\n"You might expect this to work:
IO.puts "#{self()}: Working on it!\n"However, the PID returned by self isn’t a string; it’s a data structure. And we can’t interpolate that data structure into the string that gets printed to the screen. Instead, we have to first convert the PID data structure’s internal representation into a string, which is exactly what the inspect function does.
Getting System Info
In the video we counted the number of processes by using the Elixir Process module, like so:
iex> Process.list |> Enum.count

The Process.list function returns a list of the PIDs of all currently-running processes, which we then counted.
Here’s another way to do the same thing using the Erlang module’s system_info function:
iex> :erlang.system_info(:process_count)

In fact, the system_info function will return all sorts of system-level information depending on the argument you pass it. For example, passing it :process_limit returns the maximum number of processes that can be alive simultaneously, which by default is:
iex> :erlang.system_info(:process_limit)
262144

And that’s just the tip of the iceberg. The amount of information available to you in real time, while the Erlang VM is running, is truly impressive!
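Here are a couple of other arguments worth trying; the values shown are only examples and will differ on your machine:

iex> :erlang.system_info(:schedulers_online)
8

iex> :erlang.system_info(:atom_limit)
1048576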
Backlog Queue
In the video we said that if a client tries to connect to the server while it’s busy handling a request, then the connection request gets put in a backlog queue. And when that queue fills up with pending connections, new client connections are rejected.
By default, the backlog queue can hold 5 pending connections. However, you can change the queue size by passing an option to the listen function. For example, here’s how to set the maximum size of the backlog queue to 10, along with our other options:
options = [:binary, backlog: 10, packet: :raw, active: false, reuseaddr: true]
{:ok, listen_socket} = :gen_tcp.listen(port, options)

We put all the options on a separate line just to make it easier to read.
Exercise: Write a Simple Timer
Write a Timer module that has a remind function taking two arguments: a string representing something you want to be reminded about and the number of seconds in the future when you want to be reminded about that thing. Here’s an example of how to use it:
Timer.remind("Stand Up", 5)
Timer.remind("Sit Down", 10)
Timer.remind("Fight, Fight, Fight", 15)When the timer expires, simply print the reminder.
There’s one gotcha: If you put the Timer module and the example code above in a file (for example, timer.ex), and then you run that file, you won’t get any reminders! That’s because the elixir executable exits the Erlang VM after all the code in the file has been executed. So all the reminder processes are killed before their timers have expired.
There are two ways to fix this. One is to sleep indefinitely at the end of the file so that the Erlang VM doesn’t exit:
:timer.sleep(:infinity)

Another solution is to tell the elixir executable not to exit the Erlang VM, which you can do using the --no-halt option, like so:
elixir --no-halt timer.ex
Because you told the VM to never halt, after getting all the reminders you’ll need to press CTRL-C twice to kill the VM.
defmodule Timer do
  def remind(reminder, seconds) do
    spawn(fn ->
      :timer.sleep(seconds * 1000)
      IO.puts reminder
    end)
  end
end

Exercise: Super-Mega Spawn
Just how lightweight and fast is it to spawn a single process? We’re talking an initial memory footprint of 1-2 KB and a few microseconds to spawn. You can spawn thousands of processes on a single machine without the Erlang VM breaking a sweat. Go ahead and give this a try in an iex session:
iex> Enum.map(1..10_000, fn(x) -> spawn(fn -> IO.puts x * x end) end)
That spawned 10,000 processes, each printing the square of one of the numbers 1 through 10,000. The point isn’t to show how fast Elixir can do math. Rather, it’s to show that Elixir (thanks to the Erlang VM) is highly optimized around the use of processes. When we hear the word “process”, we programmers tend to think of something fairly expensive to create and manage. Part of learning the Elixir/Erlang way is to abandon that mindset and embrace processes!
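If you're curious about those numbers, you can get rough measurements of your own in iex. :timer.tc reports the elapsed microseconds, and Process.info returns the memory footprint of a process in bytes (your exact figures will vary):

iex> {microseconds, pid} = :timer.tc(fn -> spawn(fn -> :timer.sleep(:infinity) end) end)
{21, #PID<0.135.0>}

iex> Process.info(pid, :memory)
{:memory, 2688}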
Exercise: Get Comfortable with Observer
The graphical Observer tool is a great way to see what’s going on inside the Erlang VM and interact with processes. Once you’ve spent some time in the Observer you’ll begin to see your application as being more like an operating system unto itself made up of cooperating processes.
Spend a minute or two just poking around in the Observer GUI and read through the Observer User’s Guide for explanations of the information displayed in each tab.
Exercise: Write an HttpServer Test
Write a test for the HttpServer module. You’ll need to start the server in its own process, connect to it and send a request through a socket, and then verify the response.
You can use the HttpClient module you wrote in the previous section’s exercise to connect to the server and send it a request through a socket. You can find valid request/response pairs in your existing HandlerTest module.
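If your HttpClient module ended up looking different, here's a minimal sketch of a send_request function that the test below assumes; the localhost host, the 4000 port, and the socket options are assumptions you should adjust to match your own server:

defmodule Servy.HttpClient do
  def send_request(request) do
    host = 'localhost'
    {:ok, socket} = :gen_tcp.connect(host, 4000, [:binary, packet: :raw, active: false])
    :ok = :gen_tcp.send(socket, request)
    # Read the full response, then close our end of the socket.
    {:ok, response} = :gen_tcp.recv(socket, 0)
    :ok = :gen_tcp.close(socket)
    response
  end
end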
defmodule HttpServerTest do
  use ExUnit.Case

  alias Servy.HttpServer
  alias Servy.HttpClient

  test "accepts a request on a socket and sends back a response" do
    spawn(HttpServer, :start, [4000])

    request = """
    GET /wildthings HTTP/1.1\r
    Host: example.com\r
    User-Agent: ExampleBrowser/1.0\r
    Accept: */*\r
    \r
    """

    response = HttpClient.send_request(request)

    assert response == """
           HTTP/1.1 200 OK\r
           Content-Type: text/html\r
           Content-Length: 20\r
           \r
           Bears, Lions, Tigers
           """
  end
end

Exercise: Transfer Socket Ownership
In the video we sent a /kaboom request to demonstrate what happens when a process dies unexpectedly. The exception gets raised when the Servy.Handler.handle function is called, right here:
def serve(client_socket) do
  client_socket
  |> read_request
  |> Servy.Handler.handle # KABOOM!
  |> write_response(client_socket)
end

Since an exception is raised, the write_response function doesn’t get called, which is problematic because it’s responsible for closing the client socket:
def write_response(response, client_socket) do
  :ok = :gen_tcp.send(client_socket, response)
  # left out print statements here
  :gen_tcp.close(client_socket)
end

Now, this isn’t a big deal for our web server. We don’t intend to use it in a production environment where it would run for a long time and potentially leave a bunch of open sockets lying around. But if you were building a real web server, you’d need to be more mindful of how you manage limited socket resources. That being said, we don’t recommend building a real web server from scratch, as battle-tested web servers already exist.
In any event, it’s worth noting that gen_tcp has a handy, built-in solution for closing the socket. When a socket is created, it remembers the process that created it. That process is referred to as the controlling process. And if that controlling process dies, the socket it created is automatically closed.
OK, so who’s the controlling process of our client_socket? Well, that socket gets created in our accept_loop function when a client connection is accepted:
{:ok, client_socket} = :gen_tcp.accept(listen_socket)

As you know, all Elixir code runs in a process. So the process that calls accept is the controlling process.
However, we then hand the client_socket off to the serve function which runs in a new, spawned process:
spawn(fn -> serve(client_socket) end)

Here’s the problem: If that spawned process dies, it doesn’t automatically close the client_socket. Why? You guessed it: Because the spawned process is not the controlling process. Not to worry. We can make it the controlling process like so:
pid = spawn(fn -> serve(client_socket) end)
:ok = :gen_tcp.controlling_process(client_socket, pid)

Notice that first we had to bind the PID of the spawned process to the pid variable. Then we call :gen_tcp.controlling_process/2 with two arguments: the socket and the PID of the new controlling process. Now if the spawned process dies, the client_socket will be automatically closed.
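Put together, the relevant part of an accept_loop function might look something like this (a sketch with the print statements left out; your function may differ slightly):

def accept_loop(listen_socket) do
  {:ok, client_socket} = :gen_tcp.accept(listen_socket)

  # Spawn a process to handle this connection, then make it the
  # controlling process so the socket is closed if that process dies.
  pid = spawn(fn -> serve(client_socket) end)
  :ok = :gen_tcp.controlling_process(client_socket, pid)

  accept_loop(listen_socket)
end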
If you make that change in your code and then send a /kaboom request, instead of the request hanging (as it did in the video) you’ll see that curl prints “Empty reply from server” and a browser will say something about the response being empty. That makes sense. The request process died, it closed the socket on the server, and the client detects that no response was sent back.
It’s a nifty bit of housekeeping. We left it out of the video because we didn’t want it to distract from learning about processes in general. It’s also very specific to gen_tcp, so unless you end up doing a bunch of socket programming, it’s not something you need to remember.
Code So Far
The code for this section is in the processes directory found within the video-code directory of the code bundle.