I'm rewriting and repurposing things, so don't worry if you find weird or missing content.

This is my note-to-self, but I'll try to keep it clear and open enough that others can appreciate it too.

## Contact

Send me something to read: contact@segfaultsourcery.com

Look at some of my projects: @segfaultsourcery

Work with me: CV

Thanks :)

# Python

### Functional recipes

Here I collect tools and functions that I find myself needing over and over.

To use these functions, you probably want to first import this:

from functools import partial, reduce


#### Composition

def compose(*fns):
    return reduce(lambda f, g: lambda x: f(g(x)), fns, lambda x: x)


#### Reversed composition

Like compose, but the functions execute in the opposite direction.

def rcompose(*fns):
    return reduce(lambda f, g: lambda x: f(g(x)), reversed(fns), lambda x: x)
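To see the direction difference, here's a quick sketch (the recipe definitions are repeated so it runs standalone; the helper names are mine):

```python
from functools import reduce

def compose(*fns):
    return reduce(lambda f, g: lambda x: f(g(x)), fns, lambda x: x)

def rcompose(*fns):
    return reduce(lambda f, g: lambda x: f(g(x)), reversed(fns), lambda x: x)

def add_one(x):
    return x + 1

def double(x):
    return x * 2

# compose applies right-to-left, like mathematical composition:
print(compose(double, add_one)(3))   # double(add_one(3)) -> 8

# rcompose applies left-to-right, in reading order:
print(rcompose(double, add_one)(3))  # add_one(double(3)) -> 7
```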


#### Conjoined execution

Same input, multiple outputs.

def conjoin(*fns):
    return partial(lambda *args, **kw: tuple(fn(*args, **kw) for fn in fns))


Example:

def half(x):
    return x / 2

def double(x):
    return x * 2

half_double = conjoin(half, double)
print(half_double(20))


#### Remove empty

Remove empty entries.

def remove_empty(items):
    return filter(len, items)

def remove_empty_strings(items):
    return filter(compose(len, str.strip), items)
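For example (with compose from above inlined so this runs on its own):

```python
from functools import reduce

def compose(*fns):
    return reduce(lambda f, g: lambda x: f(g(x)), fns, lambda x: x)

def remove_empty(items):
    return filter(len, items)

def remove_empty_strings(items):
    return filter(compose(len, str.strip), items)

# remove_empty drops anything with length 0:
print(list(remove_empty(['a', '', [], [1], 'b'])))  # ['a', [1], 'b']

# remove_empty_strings also drops whitespace-only strings:
print(list(remove_empty_strings(['a', '   ', '', 'b'])))  # ['a', 'b']
```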


#### Negating a function

def negate(fn):
    return partial(lambda *args, **kw: not fn(*args, **kw))
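A small usage sketch (the predicate names are mine):

```python
from functools import partial

def negate(fn):
    return partial(lambda *args, **kw: not fn(*args, **kw))

def is_even(x):
    return x % 2 == 0

is_odd = negate(is_even)

print(list(filter(is_odd, range(6))))  # [1, 3, 5]
```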


#### Assign

This takes a collection of dicts and merges them together into a new dict. No input dicts are edited; only copies are made.

def assign(*dicts):
    return reduce(lambda d1, d2: dict(d1, **d2), dicts, {})


Examples:

assign({'a': 1, 'b': 2}, {'b': 3, 'c': 4})
# -> {'a': 1, 'b': 3, 'c': 4}

assign({'a': 1, 'b': 2}, {'c': 4}, {'dog': 'water'})
# -> {'a': 1, 'b': 2, 'c': 4, 'dog': 'water'}


### What is this?

I really like list comprehensions. They're very powerful tools, but they have a problem: they don't stack well. If you put a comprehension inside a comprehension, the code probably isn't going to be very comprehensible anymore.

You could store intermediate results in variables, and that can be very readable, but for this type of problem I tend to prefer method chains.

The Slinkie library aims to add these method chains to Python 3.5 and up. It's written with LINQ and JS in mind, and aims to look and feel mostly the same. An important difference from JS is that it's lazy (just like LINQ).

### How do I install it?

It's available for Python 3.5 and up on PyPI.

pip install slinkie


Import it to your project by saying

from slinkie import Slinkie


### Examples

In a lot of languages, for example JavaScript, it's possible to say things like this:

animals = [lassie, skippy, scooby, cerberus, garfield]

// Find all the good dogs.
good_dogs = animals
    .filter(animal => animal.type == 'dog')
    .filter(dog => dog.is_good);

// Pat the good dogs.
good_dogs.map(dog => dog.pat());


If you just wanted to pat all the good dogs, you could say something like this:

// Find and pat all the good dogs.
animals
    .filter(animal => animal.type == 'dog')
    .filter(dog => dog.is_good)
    .map(dog => dog.pat());


With Slinkie, the same thing would look something like this:

from slinkie import Slinkie

animals = [lassie, skippy, scooby, cerberus, garfield]

(
    Slinkie(animals)
    .filter(lambda animal: animal.type == 'dog')
    .filter(lambda dog: dog.is_good)
    .map(lambda dog: dog.pat())
    .consume()
)


You'll notice a few differences. The parentheses around the whole expression allow us to write it over multiple lines; without them, this would be a syntax error. Next, you'll notice that Python's lambdas are different from those in both JS and LINQ. You should read up on them if you're not familiar, because the differences are important. The third thing you'll notice is the call to consume() at the very end of the chain. This is because Slinkie is lazy. consume() tells the Slinkie to run through all the elements in the list and actually do something with them. If you left it out, nothing would actually happen.
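This laziness is ordinary Python generator laziness. Here's the same effect in plain Python, without Slinkie (the names are mine):

```python
# A generator pipeline does nothing until something consumes it.
patted = []

def pat(dog):
    patted.append(dog)

dogs = ['lassie', 'skippy']
pipeline = (pat(dog) for dog in dogs)

print(patted)  # [] - nothing has been patted yet

for _ in pipeline:  # the equivalent of .consume()
    pass

print(patted)  # ['lassie', 'skippy']
```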

Consume is also special in that it doesn't return anything. There are other functions you can call to get, for example, a list or a set back:

good_dogs = (
    Slinkie(animals)
    .filter(lambda animal: animal.type == 'dog')
    .filter(lambda dog: dog.is_good)
    .list()  # or .set(), .tuple(), or .dict(...).
)


You can also use it as a generator (because it actually is a generator):

# Considering Python's rather lacking lambdas,
# it's often nicer to define a small function.
def is_dog(animal):
    return animal.is_dog

for dog in Slinkie(animals).filter(is_dog):
    print(dog.name, 'is a dog.')


For now, I encourage you to look at the unit tests for more toy examples.

For the LINQ developers out there, there are aliases for a number of methods: select can be used in place of map, where in place of filter, and so on. SelectMany doesn't have a direct counterpart; use flatten instead.

# Pickle Memo

This works like a regular memoization decorator, except it pickles its results to disk.

import os
import pickle

def pickle_memo(fn):
    name = fn.__name__
    qualname = fn.__qualname__
    memo_file = f"pickles/{qualname}.pickle"

    os.makedirs("pickles", exist_ok=True)

    try:
        with open(memo_file, "rb") as handle:
            lookup = pickle.load(handle)
    except OSError:
        print(f"Warning: could not load {memo_file!r}. Falling back to empty lookup.")
        lookup = {}

    def _wrapper(*args, **kw):
        key = frozenset((args, frozenset(kw.items())))

        if key in lookup:
            return lookup[key]

        result = fn(*args, **kw)
        lookup[key] = result

        with open(memo_file, "wb") as handle:
            pickle.dump(lookup, handle)

        return result

    _wrapper.__name__ = name

    return _wrapper


Example:

from time import sleep

@pickle_memo
def add_three(number):  # the original function name was lost; this one is mine
    sleep(1)
    return number + 3

if __name__ == '__main__':
    for i in range(5):
        print(add_three(i))

# Builder pattern

This pattern is a common way of creating instances in Rust. It will help you write cleaner and more readable code that'll be easy to expand later.

Let's dive straight into everyone's favorite toy example, the car:


#[derive(Debug)]
struct Car {
    number_of_doors: usize,
    color: Color,
}


Cars have a number of defining properties; let's pretend there are only two for now. (This struct is very simple, so we could of course just create an impl with a new() in it, but bear with me.)

In Rust, the builder pattern consists of a struct with multiple functions that each consume self and return it again. It is commonly given the name of the type it builds, followed by the word Builder. In our case, it would be a CarBuilder.


struct CarBuilder {
    number_of_doors: usize,
    color: Option<Color>,
}


This CarBuilder contains data that will later be used to construct the Car. Note that the type of color differs from the one in Car.

Let's start by giving the CarBuilder a new() and initialize the values.


impl CarBuilder {
    pub fn new() -> Self {
        Self {
            number_of_doors: 5,
            color: None,
        }
    }
}


Here we set the number of doors to 5, as that's a very common number to have. Four actual doors plus one trunk. The color is set to None, because there's no "normal" color for a car to have.

Let's add another function to build a Car. We'll set a rule here: it only builds one if we've set a color.


impl CarBuilder {
    pub fn build(self) -> Result<Car, CarBuilderError> {
        let color = match self.color {
            Some(color) => color,
            None => return Err(CarBuilderError::NoColor),
        };

        Ok(Car {
            number_of_doors: self.number_of_doors,
            color,
        })
    }
}


You'll notice that build() returns Result<Car, CarBuilderError>. We'll get to CarBuilderError in a bit.

Next we'll add a function that can set the number of doors. number_of_doors is a usize, and we don't judge here, so users can have millions of doors if they want. That means we'll only have one rule: don't set the number to 0.


impl CarBuilder {
    pub fn number_of_doors(mut self, number_of_doors: usize) -> Result<Self, CarBuilderError> {
        if number_of_doors == 0 {
            Err(CarBuilderError::WrongNumberOfDoors)
        } else {
            self.number_of_doors = number_of_doors;
            Ok(self)
        }
    }
}


The rule about the number of doors is enforced by returning an error if the number is wrong.

We will also add a function that lets us set the color. If you look back at the build function, you'll notice that it will fail to build if you don't pick a color.


impl CarBuilder {
    pub fn color(mut self, color: Color) -> Self {
        self.color = Some(color);
        self
    }
}


(As we all know, cars can only be one of two colors).


#[derive(Debug)]
enum Color {
    Red,
    Green,
}


Now all we have left to do is to define those errors we mentioned before.


#[derive(Debug)]
enum CarBuilderError {
    WrongNumberOfDoors,
    NoColor,
}

impl Error for CarBuilderError {
    fn description(&self) -> &str {
        match self {
            CarBuilderError::WrongNumberOfDoors => "The car must have at least one door.",
            CarBuilderError::NoColor => "The car must have a color.",
        }
    }
}

impl fmt::Display for CarBuilderError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "{}", self.description())
    }
}


That's it. Try it out like this:

fn main() -> Result<(), Box<dyn Error>> {
    let car = CarBuilder::new()
        .number_of_doors(3)?
        .color(Color::Red)
        .build()?;

    println!("car = {:?}", car);

    Ok(())
}


If you've discovered a new feature a car might have, like horsepower or a chromed exhaust pipe tip, you can easily add it now.

Good luck.

# Full example

use std::error::Error;
use std::fmt;

fn main() -> Result<(), Box<dyn Error>> {
    let car = CarBuilder::new()
        .number_of_doors(3)?
        .color(Color::Red)
        .has_chromed_exhaust(true)
        .build()?;

    println!("car = {:?}", car);

    Ok(())
}

#[derive(Debug)]
struct Car {
    number_of_doors: usize,
    color: Color,
    horsepower: usize,
    has_chromed_exhaust: bool,
}

struct CarBuilder {
    number_of_doors: usize,
    color: Option<Color>,
    horsepower: usize,
    has_chromed_exhaust: bool,
}

impl CarBuilder {
    pub fn new() -> Self {
        Self {
            number_of_doors: 5,
            color: None,
            horsepower: 45,
            has_chromed_exhaust: false,
        }
    }

    pub fn build(self) -> Result<Car, CarBuilderError> {
        let color = match self.color {
            Some(color) => color,
            None => return Err(CarBuilderError::NoColor),
        };

        Ok(Car {
            number_of_doors: self.number_of_doors,
            color,
            horsepower: self.horsepower,
            has_chromed_exhaust: self.has_chromed_exhaust,
        })
    }

    pub fn number_of_doors(mut self, number_of_doors: usize) -> Result<Self, CarBuilderError> {
        if number_of_doors == 0 {
            Err(CarBuilderError::WrongNumberOfDoors)
        } else {
            self.number_of_doors = number_of_doors;
            Ok(self)
        }
    }

    pub fn color(mut self, color: Color) -> Self {
        self.color = Some(color);
        self
    }

    pub fn horsepower(mut self, value: usize) -> Self {
        self.horsepower = value;
        self
    }

    pub fn has_chromed_exhaust(mut self, value: bool) -> Self {
        self.has_chromed_exhaust = value;
        self
    }
}

#[derive(Debug)]
enum Color {
    Red,
    Green,
}

#[derive(Debug)]
enum CarBuilderError {
    WrongNumberOfDoors,
    NoColor,
}

impl Error for CarBuilderError {
    fn description(&self) -> &str {
        match self {
            CarBuilderError::WrongNumberOfDoors => "A car must have at least one door.",
            CarBuilderError::NoColor => "A car must have a color.",
        }
    }
}

impl fmt::Display for CarBuilderError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "{}", self.description())
    }
}


# Cross compile Rust to your Raspberry Pi

## Who is this for?

If you want to get started with ...

• ...cross compiling code for your Pi,
• ...packaging your programs in .deb archives,
• ...all of the above

then this guide is for you. This post serves as a starting point for your endeavours, so instead of going too much in depth, I will try to point you in the right direction.

## Getting started

In this guide I assume that you already have rust and cargo installed. On top of those, I will use cross and cargo-deb.

## Cross compilation

For cross compilation, I recommend the cross tool for cargo. Note that it relies on Docker being installed.

cargo install cross


Once it's installed, it's very easy to cross compile to a wide range of architectures. One thing to keep in mind is that it can be very tricky to cross-compile code (or dependencies) that bind to C code. Cross describes how to produce your own Dockerfile, but I found it fairly difficult to get this to run at all, let alone smoothly. Pure Rust projects work very well, though.

### Raspberry Pi 1 and Zero

cross build --release --target arm-unknown-linux-gnueabihf


If you prefer to create statically linked executables, use arm-unknown-linux-musleabihf instead.

### Raspberry Pi 2, 3, and 4

cross build --release --target armv7-unknown-linux-gnueabihf


Again, if you prefer to create statically linked executables, use armv7-unknown-linux-musleabihf instead.

## Create a deb package

I recommend using cargo-deb.

cargo install cargo-deb


Before running any of the following commands, make sure that your code has already been built. This is because cargo-deb defaults to using cargo instead of cross for compilation.

### Raspberry Pi 1 and Zero

cargo deb --no-build --target arm-unknown-linux-musleabihf


### Raspberry Pi 2, 3, and 4

cargo deb --no-build --target armv7-unknown-linux-gnueabihf


Note that this creates a bare-bones archive. You can find more information on this here.

## Launch on boot

The easiest way to get your service up and running at boot is to create a systemd.service file (or a service unit configuration file).

If you're installing your service with a deb file (as described above), and you went with the minimal approach described here, then your executable(s) will be installed in /usr/bin by default. You will probably also want the network to already be available when your program starts. Based on these assumptions, here's what a basic unit could look like:

Store this file in your project under assets/my-awesome-service.service.

[Unit]
Description=My awesome service
After=network.target

[Service]
ExecStart=/usr/bin/my-awesome-service
WorkingDirectory=/home/pi/
StandardOutput=inherit
StandardError=inherit
Restart=always
User=pi

[Install]
WantedBy=multi-user.target


To add it to your deb archive, you'll want to edit your Cargo.toml file, and add an asset under [package.metadata.deb].

[package.metadata.deb]
assets = [
["assets/my-awesome-service.service", "etc/systemd/system/", "755"],
]


(If that didn't make sense, you'll want to look at cargo-deb's configuration).

Once this is installed, you should now be able to monitor your service using systemd's service command.

sudo service my-awesome-service start
sudo service my-awesome-service status
sudo service my-awesome-service stop


Note that the executable will run as the pi user, and use /home/pi as its working directory. This might not be right for you. If you want to do this right, you'll probably want to useradd -r a system account, give it as few privileges as possible, and give it a place to store configurations under /etc/my-awesome-service. You may also want to store its logs under /var/log/my-awesome-service/main.log, which you of course want to rotate, etc. This post isn't really about any of that, though. It's just about getting you started :)

Good luck!

# Bumper

## How?

It finds Cargo.toml in your current working directory, and edits the version. If you don't specify anything, it bumps the patch version. You can also tell it to bump the major or minor versions.

When you bump a number, the versions under it reset to zero automatically.

### Bump the patch version

$ bumper
0.2.2 -> 0.2.3

or

$ bumper patch
0.2.3 -> 0.2.4


### Bump the minor version

$ bumper minor
0.2.4 -> 0.3.0

### Bump the major version

$ bumper major
0.3.0 -> 1.0.0


## Install it

$ cargo install --git https://github.com/segfaultsourcery/bumper-rs

# Scaffold

## Quickly add dependencies to your existing Rust project.

I find myself always scouring the internet or looking through old projects to find the same dependencies over and over. This is a tool I made to automate that process. Find it on crates.io.

The help screen really says it all.

scaffold 0.2.0
Quickly add dependencies to your Rust project.

USAGE:
    scaffold [FLAGS] [OPTIONS] <SUBCOMMAND>

FLAGS:
    -a, --ask        Ask before each dependency.
    -h, --help       Prints help information
    -V, --version    Prints version information
    -v, --verbose    Be more verbose.

OPTIONS:
    -g, --groups <groups-path>    [default: ~/.config/scaffold/groups.toml]
    -p, --path <toml-path>        [default: Cargo.toml]

SUBCOMMANDS:
    add     Add groups to your project.
    help    Prints this message or the help of the given subcommand(s)
    list    List all available groups.

### Define custom groups

By default, scaffold will look for groups in ~/.config/scaffold/groups.toml. If this file doesn't exist, it will be created. For the sake of convenience, groups.toml is a toml file, with the intent of looking and feeling like Cargo.toml.

Example:

[json]
serde_derive = "*"
serde_json = "*"
serde = { version = "*", features = ["derive"] }

[cli]
structopt = "*"
config = "*"
shellexpand = "*"

Note that if the version is starred, then scaffold will try to determine the latest version.

### List available groups

You can list all your available groups:

$ scaffold list


Result:

cli
config = "0.9.3"
shellexpand = "1.0.0"
structopt = "0.3.4"
json
serde = { features = ["derive"], version = "1.0.102" }
serde_derive = "1.0.102"
serde_json = "1.0.41"


### Add a group

$ scaffold --verbose add json

Result:

Adding serde = { features = ["derive"], version = "1.0.102" }.
Adding serde_derive = "1.0.102".
Adding serde_json = "1.0.41".

You can add more than one at the same time:

$ scaffold --verbose add json cli


Result:

Adding serde = { features = ["derive"], version = "1.0.102" }.


### Asking before inserting each crate

You can also tell it to ask you before each crate to see if you want it:

$ scaffold --ask --verbose add json cli

Result:

Add config = "0.9.3"? [Y/n] y
Adding config = "0.9.3".
Add shellexpand = "1.0.0"? [Y/n] n
Add structopt = "0.3.4"? [Y/n] y
Adding structopt = "0.3.4".

# Concat

A simple utility for concatenating lines from stdin.

## Motivation

I often find myself in the situation where I need to scrape data from logs, use combinations of grep, cut, and sed to extract values, then turn those values into various SQL queries. The last step is always the most annoying, so I decided to spend a few minutes to make this tool.

## Examples

### Example file

###### animals.txt

Lassie
Flipper
Willy

### Raw

Put everything together with no delimiter or quote marks:

$ cat animals.txt | concat
LassieFlipperWilly


$ cat animals.txt | concat -q "'"
'Lassie''Flipper''Willy'

### Let's add a delimiter

$ cat animals.txt | concat -d ", "
Lassie, Flipper, Willy


### Let's do both

$ cat animals.txt | concat -q "'" -d ", "
'Lassie', 'Flipper', 'Willy'

# GithubApi

## This is a work in progress!

### Please don't put it into production yet.

## What's there?

Not a whole lot, yet. The list of examples is pretty exhaustive.

## How do I use it?

Start by adding it to your dependencies, then check the examples.

[dependencies]
githubapi = { git = "https://github.com/segfaultsourcery/githubapi", tag = "v0.1.0" }

## Envelope

### Result type

Every endpoint returns Result<GitHubApiResult<T>, GitHubApiError>.

#### GitHubApiResult

pub struct GitHubApiResult<T> {
    pub result: T,
    pub raw_result: String,
    pub limits: Option<LimitRemainingReset>,
    pub owner: Option<String>,
    pub repository: Option<String>,
    pub next_page: Option<u64>,
}

#### GitHubApiError

pub enum GitHubApiError {
    NotImplemented,
    JsonError(JsonError, String),
    GitHubError(String, String),
    ReqwestError(ReqwestError),
}

## Examples

### Get rate limit

let gh = GitHubApi::new(Credentials::UsernamePassword(username, password));
let result = gh.get_rate_limit();
println!("{:#?}", result);

### Get license

let gh = GitHubApi::new(Credentials::UsernamePassword(username, password));
let license = gh.get_license("segfaultsourcery", "githubapi");
println!("{:#?}", license);

### Get releases

let gh = GitHubApi::new(Credentials::UsernamePassword(username, password));
for page in gh.get_releases("segfaultsourcery", "githubapi") {
    println!("{:#?}", page);
}

### Get tags

let gh = GitHubApi::new(Credentials::UsernamePassword(username, password));
for page in gh.get_tags("segfaultsourcery", "githubapi") {
    println!("{:#?}", page);
}

# Simple data stitching

I found myself in a situation where I had a number of CSV files that all shared some key data, and all had to be put together into a larger dataset. I figured that the easiest way to do this would be to deserialize the files, then stitch them together using a portion of their data as a key. I decided to try my hand at writing a macro to solve the issue, and I ended up with two of them: one for one-to-one relations, and one for one-to-many.

## The problem I was solving

I had two files. One had two pieces of information, A and B, and the other had another two pieces, B and C. What I really needed was a HashMap with A and C. The two were connected by the B columns.

Let's put this in easier terms with a useless toy example. The first file, user_email.csv, has a username and an email address; the other file, email_color.csv, has an email address and a favorite color. I want to be able to go from username to favorite color directly, and cut out the email address.

## Example files

##### user_email.csv

username,email
alice,alice@example.com
bob,bob@example.com
carol,carol@example.com

##### email_color.csv

email,color
alice@example.com,red
bob@example.com,green
carol@example.com,blue

## Some nice aliases

These things tend to be a little easier to read and understand if you make a couple of type aliases.

type Username = String;
type Email = String;
type Color = String;

## Deserialization

Now we need to create two structs that I could deserialize the data into:

/// The struct for user_email.csv.
#[derive(Deserialize)]
struct UserEmail {
    username: Username,
    email: Email,
}

/// The struct for email_color.csv.
#[derive(Deserialize)]
struct EmailColor {
    email: Email,
    color: Color,
}

I used the csv crate to deserialize the files. It does a great job and they explain how to use it quite well.

## Using stitch_one_to_one

The next step is to run the macro. The macro will return a Vec<(Left, Right)>. In our case, Left will be UserEmail and Right will be EmailColor.
// Supplying these functions is left as an exercise to the reader :)
let user_emails: Vec<UserEmail> = deserialize_user_emails();
let email_colors: Vec<EmailColor> = deserialize_email_colors();

let result = stitch_one_to_one!(
    user_emails,
    (email),
    email_colors,
    (email)
);

The anatomy of the call above is this:

stitch_one_to_one!(
    lefty_items,
    (a, b), // The lefty key to use.
    righty_items,
    (x, y) // The righty key to use.
);

The keys above translate into tuples of (left.a, left.b) for all lefty items, and (right.x, right.y) for all righties. The keys must be of the same length, and they will be compared to each other, so they should probably be of the same type as well.

To clarify, all the items in the left key must be members of the left item. The same goes for the right item; the items in its key must be members of right. For UserEmail, a valid key would be any combination of username and email.

## Finishing up

Like I mentioned earlier, the macro will return a Vec<(UserEmail, EmailColor)>, and now it's up to you to traverse them and produce something that makes sense to you. In my case, this was a HashMap<Username, Color>. This is how I produced it:

let lookup: HashMap<Username, Color> = items
    .iter()
    .map(|(left, right)| {
        (
            left.username.to_string(),
            right.color.to_string(),
        )
    })
    .collect();

And we're done!

# One to one

## Macro

macro_rules! stitch_one_to_one {
    ($left:expr, ($($left_key:tt),*), $right:expr, ($($right_key:tt),*)) => {{
        let mut right_by_key = HashMap::new();
        for item in $right {
            let right_key = ($( &item.$right_key ),*);
            right_by_key.entry(right_key).or_insert(item);
        }

        $left
            .iter()
            .filter_map(|item| {
                let left_key = ($( &item.$left_key ),*);
                match right_by_key.get(&left_key) {
                    Some(it) => Some((item, it.clone())),
                    None => {
                        eprintln!("Couldn't find items matching {:?} for {:?}.", left_key, item);
                        None
                    }
                }
            })
            .collect::<Vec<_>>()
    }};
}

## Example usage

fn show_one_to_one(
    user_emails: &[UserEmail],
    email_colors: &[EmailColor],
) {
    let items = stitch_one_to_one!(
        user_emails,
        (email),
        email_colors,
        (email)
    );

    let lookup: HashMap<Username, Color> = items
        .iter()
        .map(|(left, right)| {
            (
                left.username.to_string(),
                right.color.to_string(),
            )
        })
        .collect();

    println!("lookup = {:#?}", lookup);
}

# One to many

## Macro

macro_rules! stitch_one_to_many {
    ($left:expr, ($($left_key:tt),*), $right:expr, ($($right_key:tt),*)) => {{
        let mut right_by_key = HashMap::new();
        for item in $right {
            let right_key = ($( &item.$right_key ),*);
            right_by_key.entry(right_key).or_insert(vec![]).push(item);
        }

        $left
            .iter()
            .filter_map(|item| {
                let left_key = ($( &item.$left_key ),*);
                match right_by_key.get(&left_key) {
                    Some(it) => Some((item, it.clone())),
                    None => {
                        eprintln!("Couldn't find items matching {:?} for {:?}.", left_key, item);
                        None
                    }
                }
            })
            .collect::<Vec<_>>()
    }};
}


## Example usage


fn show_one_to_many(
    user_emails: &[UserEmail],
    email_colors: &[EmailColor],
) {
    let items = stitch_one_to_many!(
        user_emails,
        (email),
        email_colors,
        (email)
    );

    let lookup: HashMap<Username, Vec<Color>> = items
        .iter()
        .map(|(left, rights)| {
            (
                left.username.to_string(),
                rights
                    .iter()
                    .map(|it| it.color.to_string())
                    .collect(),
            )
        })
        .collect();

    println!("lookup = {:#?}", lookup);
}
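For comparison, the same one-to-many stitch is only a few lines of Python, since dicts and lists are built in (the function and variable names here are mine):

```python
from collections import defaultdict

def stitch_one_to_many(lefts, left_key, rights, right_key):
    # Index the right-hand items by key, then look each left item up.
    by_key = defaultdict(list)
    for right in rights:
        by_key[right_key(right)].append(right)
    return [(left, by_key[left_key(left)])
            for left in lefts
            if left_key(left) in by_key]

user_emails = [('alice', 'alice@example.com'), ('bob', 'bob@example.com')]
email_colors = [('alice@example.com', 'red'), ('bob@example.com', 'green')]

pairs = stitch_one_to_many(user_emails, lambda u: u[1], email_colors, lambda c: c[0])
lookup = {left[0]: [color for _, color in rights] for left, rights in pairs}
print(lookup)  # {'alice': ['red'], 'bob': ['green']}
```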


# PostgreSQL

My DB of choice.

# Slope and intercept

I recently discovered something very nice. Postgres has several functions geared towards helping you with statistics. I will cover two of them here.

Let's use it to solve something familiar: $$y = mx + b$$

You can use it to draw a straight line through a set and pretend you've made an AI.

## Functions

We'll be using two functions for this.

### Slope

regr_slope(Y, X)

The description says "slope of the least-squares-fit linear equation determined by the (X, Y) pairs". In other words, this is our $$m$$.

Note that Y comes before X.

### Intercept

regr_intercept(Y, X)

The description says "y-intercept of the least-squares-fit linear equation determined by the (X, Y) pairs". In other words, this is our $$b$$.

Note that Y comes before X.
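These are the ordinary closed-form least-squares formulas. Here's a quick cross-check in plain Python (pure stdlib; the function names mirror the Postgres ones, but the implementation is my own sketch):

```python
# Least-squares slope and intercept, the same quantities Postgres'
# regr_slope(Y, X) and regr_intercept(Y, X) compute.
def regr_slope(ys, xs):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

def regr_intercept(ys, xs):
    return sum(ys) / len(ys) - regr_slope(ys, xs) * sum(xs) / len(xs)

xs = list(range(1, 11))     # x = 1..10
ys = list(range(4, 32, 3))  # y = 4, 7, ..., 31 (the toy_example data below)

print(regr_slope(ys, xs), regr_intercept(ys, xs))  # 3.0 1.0
```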

## Example

Let's make a table and push some data into it:

select
    generate_series(1, 10) x,
    generate_series(4, 31, 3) y
into
    toy_example;

 x  | y
----+----
1 |  4
2 |  7
3 | 10
4 | 13
5 | 16
6 | 19
7 | 22
8 | 25
9 | 28
10 | 31
(10 rows)


Now, just from looking at it, we can tell two things: $$m = 3$$ and $$b = 1$$. Like I said, it's a toy example. Let's still verify our assumptions by having Postgres give us those numbers.

select
    regr_slope(y, x) as m,
    regr_intercept(y, x) as b
from
    toy_example;

 m | b
---+---
3 | 1
(1 row)


Looks good so far. Let's use it to calculate the next few values:

select
    x, m * x + b as y
from
    generate_series(9, 14) as x,
    (
        select
            regr_slope(y, x) as m,
            regr_intercept(y, x) as b
        from
            toy_example
    ) as mb;

 x  | y
----+----
9 | 28
10 | 31
11 | 34
12 | 37
13 | 40
14 | 43
(6 rows)


Congratulations, you can now predict the future. Go forth and play the lottery!

## Notes

#### Reddit user /u/spinur1848, in a comment:

Probably shouldn't be playing with those without regr_r2, which gives you the square of the correlation coefficient.

The slope and intercept functions will (almost) always give you values, even if the quality of the regression is garbage. The squared correlation coefficient is a measure of goodness of fit. You should probably calculate this number any time you would want to calculate a slope and/or an intercept.

# Added support for gitignore files

The watch (and serve) command will now ignore files based on .gitignore. This can be useful for when your editor creates cache or swap files.

I saw that others were having the same issue I was having, namely that mdBook frantically rebuilds with every keystroke, because the editor keeps saving its state in a cache file. This change lets you add those files to .gitignore. In my case, adding a line with "*.kate-swp" fixed the problem completely.

https://github.com/rust-lang/mdBook/pull/1044

Github user chinedufn reported the following problem:

I started adding the mdbook-linkcheck plugin to my book build process and was confused when CI was still passing even though I had forgotten to install the plugin in my ci config.

#!/bin/bash -eo pipefail
(cd book && mdbook build) && cargo doc --no-deps --document-private-items -p foo && cp -R target/doc book/book/api
2019-03-25 16:25:00 [INFO] (mdbook::book): Book building has started
2019-03-25 16:25:00 [INFO] (mdbook::book): Running the html backend
2019-03-25 16:25:01 [INFO] (mdbook::book): Running the linkcheck backend
2019-03-25 16:25:01 [INFO] (mdbook::renderer): Invoking the "linkcheck" renderer
2019-03-25 16:25:01 [WARN] (mdbook::renderer): The command wasn't found, is the "linkcheck" backend installed?
2019-03-25 16:25:01 [WARN] (mdbook::renderer): 	Command: mdbook-linkcheck
Documenting tw v0.1.11 (/root/foo-barg/app/crates/foo-bar-cli)
Finished dev [unoptimized + debuginfo] target(s) in 2.16s

He expected that mdBook's build would fail with a non-zero exit code if he tried to use a plugin it couldn't find.

https://github.com/rust-lang/mdBook/pull/1122

# War stories

Short stories from my career.

# Home Automation System

This is one of my older projects that I occasionally work on.

The fun thing about it is actually how it got started. It all began with me working the night shift at a factory. I used to have a very hard time sleeping during the day for two reasons; it was hot in my bedroom, and it was very bright. I got around the problem with the light by wearing a sleep mask, but it was still much too warm.

I experimented with different methods. First I put a couple of fans in the windows. This turned out to be a little bit too noisy, and the constant draft was annoying. After that I tried an air conditioner. This also wasn’t very good, as it was even noisier and I had to leave a window open to let the hot air out. This let more hot air in, however, and I quickly abandoned it.

Next I decided to try experimenting with timers. I wrote a simple Arduino sketch that let the fans be on for 15 minutes, then off for 15 minutes. This had the effect of making the room alternate between being too hot and too cold throughout the day, and fiddling with the numbers didn't really help. What worked on one day didn't work the next.

This is when I decided to make better use of my Arduino. I connected a simple thermistor to it, and gave it some upper and lower temperature bounds, and let the code decide when it was time to turn the fans on and off. This was much better. On colder days the fans didn’t have to run at all, on hotter days they were working overtime to keep the heat down. I could live with the noise on those days, and the quality of my sleep improved a lot.

I still had a problem with the light. I experimented with different methods of keeping the room dark, and I eventually found something that worked pretty well. Too well. Now I had my perfectly dark, nicely temperature-controlled room, and I couldn't wake up! I would frequently sleep through all of my alarms, to the point where I could swear that they never rang at all.

I went back to the drawing board. I already had a simple machine that could control my fans. What if I added a few relays and let it control my lights as well? I had it turn on every light in the room at the same time as my alarm went off, and had the worst waking experience of my life. The sudden pain in my eyes and the blaring alarm came together to scare me awake. It also wasn’t very flexible, as I couldn’t really set the time when it should go off. On work days it was okay, but it didn’t work well for weekends, and I didn’t want to upload a new sketch twice every week.

That’s when I decided to dig out an old laptop, and move all the decision making to it. The Arduino would report its temperature, and a program on the laptop would decide what the fans should do. It would also keep track of time, and start turning the lamps on in a sequence. Lamps far away from me would turn on first, and end with the one on my bedside table. The experience was again much better and waking up was much easier.
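
The staged wake-up sequence can be sketched as a small scheduling function. The lamp names and the spacing between lamps are made up for illustration; the real system simply fired relays in order:

```python
from datetime import datetime, timedelta

def lamp_schedule(alarm_time, lamps, step=timedelta(minutes=3)):
    """Given lamps ordered from farthest to nearest the bed, return
    (on_time, lamp) pairs so the farthest lamp lights up first and
    the bedside lamp lights up exactly at alarm time."""
    n = len(lamps)
    return [(alarm_time - step * (n - 1 - i), lamp)
            for i, lamp in enumerate(lamps)]
```

Walking the list from far to near means the room brightens gradually, which is the whole trick behind the gentler wake-up.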

It was at this point that I realized I had just created a home automation system, but I didn’t have a good name for it; systems like this have existed for decades, but they only became popular in the last few years, and I didn’t know of one at the time.

At this point I was very proud of what I had made, and I decided to add another feature to it. Unsure what it would be, I decided to look over what I had access to. I realized that the laptop I was using to control it had a perfectly good sound card that I wasn’t using. I decided to add a script that would start playing music a few minutes before my alarm. I again discovered that this was a little bit too crude. The laptop speakers were too weak to wake me gently, yet having them blare distorted music at full volume was annoying. I decided to hook a stereo into it with a fairly big speaker. Another relay would turn the stereo on at the appropriate time, and the laptop was plugged into it. The laptop would start playing its music, but with its volume set to 0%. It would then gradually, over the course of 5 minutes, raise the volume until it hit 100%. The stereo’s own volume knob was set to a comfortable volume.
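
The fade-in boils down to one small function. This is a sketch of the linear ramp described above, not the original script:

```python
def ramp_volume(elapsed_s, duration_s=300):
    """Linear fade-in: software volume in percent after `elapsed_s`
    seconds, reaching 100% at `duration_s` (5 minutes by default)."""
    return min(100.0, max(0.0, 100.0 * elapsed_s / duration_s))
```

A loop would call this every few seconds and push the result to the mixer; the stereo’s physical knob caps the ceiling, so 100% in software is still a comfortable volume in the room.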

At this point I got to sleep in a nice and cool, dark room, and my machines would wake me up in a very comfortable and effective manner. It was perfect!

It was perfect. It was perfect until I realized that one very important piece was missing; it wasn’t making me coffee in the morning. I went out and bought a very simple drip coffee maker, and put its power switch in the on position, then I connected its outlet to a relay so it would turn on as soon as it got power. It would then turn on at the same time as the stereo, and I would wake up every morning with the smell of coffee in my nose.

This was perfect. I used this system for years, and only stopped because I moved to a different apartment. Some version of this system has always survived, though. This summer I experimented with having it water one of my plants. It took complete care of this plant’s water needs for about six months. This is the longest I have ever kept a plant alive, so I’m very happy with the results.

The latest iteration of this system is using Raspberry Pi computers instead of the old laptop, and they communicate over the MQTT protocol.

# Revolving doors

I briefly worked with a project for a large company, known for its locks and entrance solutions. My part in it had to do with the internal message-passing system for their revolving doors, as well as testing the software. I also worked on the protocol and wrote quite a bit of its documentation.

The very first task was to assemble a control unit. A control unit is a number of custom computers running a popular real-time OS, similar to FreeRTOS or SafeRTOS. The computers communicate over a message bus that is widely used across several large industries. Assembling it meant producing my own cables and connectors, and flashing the software. Mistakes were made in assembling it; sparks were shot, and appropriate countermeasures in the form of electrical tape were deployed. All in all, it went well, and I had my machine.

The code was mostly written in C and C++, although some tools were written in Python. The operating system of choice for the developers was Ubuntu, and other people at the company I worked for were developing an app in Kotlin that was used to control the machine.

One of the most important things I worked with was ensuring that very important messages were guaranteed to be sent through the system. Very important messages included when it was time to hit the brakes. A revolving door may rotate slowly, so people tend to forget what they are: hundreds of kilograms of steel and glass. If it runs into you, you will be pushed, and you may get hurt. But if it pinches a body part between a wing and the wall, make no mistake, you will lose that body part. Therefore it’s of the utmost importance that the machine notices you and immediately deploys the brakes if it thinks a collision is about to happen. I recently witnessed this work properly twice in short succession, here in Germany. A child decided to run through the opening with only centimeters to spare. The door stopped immediately to avoid a collision. With the child out of the way (he was now inside the chamber of the door), the door decided to move again. His father yelled at him to come back out, and the child ran out through an even smaller opening. Again, the safety measures worked perfectly and the brakes were deployed. That’s not to say I wasn’t scared, however. I have learned to not blindly trust safety equipment.
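
The actual bus and protocol are proprietary, so this is only a toy sketch of the general pattern behind guaranteed delivery: a critical message is retransmitted until the receiver acknowledges it, and failure to get an acknowledgment is treated as an error, never silently ignored:

```python
def send_critical(message, transmit, max_retries=5):
    """Retransmit until acknowledged. `transmit` sends one frame and
    returns True if an ack came back in time. Raises if the message
    may have been lost, so the failure is never silent."""
    for attempt in range(max_retries):
        if transmit(message):
            return attempt + 1  # number of transmissions it took
    raise RuntimeError(f"no ack for critical message: {message!r}")
```

The key design point is the exception at the end: for something like a brake command, "maybe it arrived" must be escalated, not shrugged off.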

Since I was working on the messaging system and its protocol, I was often working tightly with the Kotlin team, giving them support as needed. I would update the documentation, add examples, or help them write their code. On occasion we also changed the protocol to better match what they were doing, and I made reference implementations they could study.

# Experimental proprietary hardware and molten metal

The Kotlin team I mentioned earlier was also responsible for writing an app that controlled a piece of experimental technology. The technology was brilliant in that it was smart enough to determine the intent of a user and to perform an action in response, without the user ever knowing it was there. Due to an NDA, I cannot describe its function in further detail. I can, however, tell a story of how I got involved in its development. I will refer to it as the “Unit” from now on.

The app was developed in Sweden by a consulting company, and its purpose was to provide a maintenance interface for technicians. The Unit had a proprietary protocol and a proprietary port. It also had an advanced detachable adapter.

The hardware and its firmware were developed by another company in another country. It was far away, but remained in the same time zone. They handed off the manufacturing of all their hardware to a company in China. This turned out to be a big problem, because the Chinese company would deliver the Swedish company a Unit to test against, but they forgot to deliver the adapter. Because of this, we had no way of talking to it. Seeing as I had some experience with marrying software to hardware, I was asked if there was anything I could do. The alternative was to wait for up to six weeks for the adapter to arrive, which would hurt productivity quite a lot.

I contacted the hardware designers, who gave me schematics and a description of the protocol. It turned out that I couldn’t easily produce the connector the adapter was using, but I could solder new leads onto the board. This was tricky, as the Unit was very small, but using the finest soldering tip I could find and a magnifying glass, I succeeded.

It turned out that communication could be done quite simply with some scrap we had laying around the office. One of the most important parts came from an Arduino. These boards expose a TTL-level serial interface (UART) for programming and communication, and you can access it from your computer by holding the reset pin to ground; the RX and TX pins can then be used as a plain serial adapter. Most of the complexity came from the software itself. I hooked the leads into the Unit and used the software specification to write my own adapter software in Python. This unblocked a team that would have been much less productive for well over a month, and it only took me a day and a half. I’m pretty proud of this, and the CEO at the time even decided to give me a little reward for it.
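
The real protocol is under NDA, so this sketch only shows the general shape of such adapter software: frame a payload for the wire, then push it through the Arduino-as-UART bridge. The frame layout (start byte, length, XOR checksum) and the port name are purely illustrative:

```python
def frame(payload: bytes) -> bytes:
    """Wrap a payload in a minimal frame: start byte, length byte,
    payload, and a one-byte XOR checksum. The Unit's actual framing
    was proprietary; this layout is an invented example."""
    checksum = 0
    for b in payload:
        checksum ^= b
    return bytes([0x7E, len(payload)]) + payload + bytes([checksum])

# Hooking it up (requires pyserial; the port name is an assumption):
#
#   import serial
#   with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
#       port.write(frame(b"\x01\x02"))
#       reply = port.read(16)
```

Keeping the framing logic as a pure function makes it easy to test against the written spec before any hardware is on the desk, which matters a lot when the hardware is one soldering slip away from being scrap.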

# Linux driver hacking

This is the story of when a company dropped a tiny computer in my lap and asked me to get it working. It was not quite a single-board computer, as it was built in several layers. Compared to something like a Raspberry Pi, it was quite thick. They didn’t give me any information about it, only a small list containing what they wanted to use it for, and which operating system they needed. The rest was up to me.

The journey to success started with a screwdriver and a soldering iron. To learn the brand and model of the computer I had to disassemble it quite a bit. Eventually I managed to install the version of Ubuntu they needed. I also installed their custom software to ensure that everything was working. I discovered four glaring issues: the wifi didn’t work, 3G didn’t work, bluetooth didn’t work, and the monitor turned completely white when I started their software.

It turned out that wifi and 3G were combined into one chip, and the drivers for it had been abandoned for a long time. As a result they were no longer compatible with modern Linux kernels, and I was not allowed to downgrade for security reasons. The monitor was white because the software required precisely one monitor. This particular computer had a hardware-based frame buffer, which showed up to the OS as an extra monitor. This was surprisingly difficult to disable, and I had to get clever with it.

The first issue I tried to solve was getting the wifi and 3G chip working. This was also surprisingly hard, as the OS wouldn’t identify it at all for me, and the machine’s own documentation didn’t list it. I had to tear the machine down and dissolve some glue before I could find out which chip it was. The next issue was to find the drivers. It turned out that the big problem with them was that Linux’s timer API had gone through a major rewrite. The drivers I was working with were long abandoned, and it fell to me to patch them, which surprisingly wasn’t too difficult. The documentation for the new and old timing systems was readily available, and the driver code was well-written C.

After getting wifi up and running I had to come up with clever ways of disabling the hardware-based frame buffer. Again, the machine’s documentation failed me, and the time-consuming solution turned out to be a combination of BIOS flags and X11 settings.

I eventually fixed all the issues, which took around 10 days in total.

# Product Planning System

This is the first of two major systems I wrote for a company producing food. I was working in a small team. We added and lost members along the way, but the core team consisted of me writing the business logic, a developer writing the database, a product manager, and a software architect.

The software we were writing was to replace a very old software system that the company was completely relying on. The old software was running on an old AIX mainframe computer. It was written in a dead language called NATURAL, and it used a not-quite-dead database named Adabas for storage.

It was written almost 30 years ago by people who are now dead. In its humble beginnings, it was serving a client computer with a text prompt. The user could type their user ID into it, and it would tell them which pallets they should load onto their trucks for delivery. Over the decades, it evolved into a multi-user system with a Telnet frontend. When I arrived at the scene it was hundreds of thousands of lines long, and it had many concurrent users working with it around the clock in multiple cities. It was capable of doing advanced product planning: it could plan when to send trucks, which pallets to put in each one, and which stores to send them to.

It had many problems, one being that the Telnet based front end was almost impossible to use unless you had received months of training. It was also slow. Very slow.

We had access to two people. One retired very soon after we started, and was generally helpful. The other felt threatened by us and decided to be very unhelpful. She would frequently claim that we were wasting everyone’s time and money by writing a replacement, because the old system was mathematically perfect, completely without bugs, and had been optimized for speed for decades, so it couldn’t possibly get any faster. She also refused to help us understand the NATURAL code, and would only share it with us on paper. We were given hundreds of pages, and needless to say, they didn’t help one bit. Her general unhelpfulness eventually got her fired, and we had to resort to other means of understanding the old program.

The first thing we did was to interview its users. I was in charge of business logic, so I did most of the interviewing. We were assured that the users knew best, and that asking them would give us all the information we needed. This was absolutely not the case, it turned out, as most users would give conflicting answers, and those answers in turn conflicted with how the program actually behaved. Reference implementations based on the user testimonies would be ripped apart by the other users. I tried to get around this by interviewing them in groups. That led to groups disagreeing with each other.

In the end, I abandoned the user interviews entirely and adopted another method: I decided to reverse engineer the program, starting with known good cases. At this point I still didn’t have a good grasp of how the Telnet application worked, so I scheduled a meeting with a power user. I laid out many valid scenarios that should work, along with my hypothesis on how each should behave. I noted what the program actually did, and if it differed from my hypothesis, I would document it thoroughly.

This still wasn’t enough. I hadn’t caught any of the corner cases. I sat down again and devised a long list of scenarios that shouldn’t work. Things like “you have three trucks, but only one pallet”, and “a product needs to go to a location, but no truck is going there”. The power user was very annoyed with me and at first refused to help, because these things would never happen and nobody would ever put them into the system. I insisted, and he complained to my boss, who first told me to stand down. I explained that I was trying to find important corner cases, and begged them for a short session. In order to learn how an algorithm works, it’s incredibly important to know how it behaves when given bad data.
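
Bad-data scenarios like the ones above translate naturally into executable checks. This toy planner is in no way the real algorithm, which was vastly more complex; it is only enough structure to show the two corner cases as tests:

```python
def plan_loads(pallets, trucks):
    """Toy stand-in for a planner: assign each pallet to a truck
    that serves its destination, filling trucks in order.
    `pallets` is a list of destinations; `trucks` maps a truck name
    to the set of destinations it serves."""
    loads = {name: [] for name in trucks}
    for dest in pallets:
        for name, serves in trucks.items():
            if dest in serves:
                loads[name].append(dest)
                break
        else:
            # "a product needs to go to a location, but no truck
            # is going there" must be a loud error, not silence
            raise ValueError(f"no truck goes to {dest!r}")
    return {name: p for name, p in loads.items() if p}
```

The payoff of such tests is exactly what the stories above describe: the failure modes nobody thought would happen are where the real behavior of the algorithm hides.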

Remember how the unhelpful developer said there were no bugs? It turned out that she was wrong. We uncovered a great deal of errors in this process, and we learned a lot about the algorithm. Using what we learned, we could answer several questions I had, and I could finally write a reference implementation that a lot of people could agree was somewhat working the way they expected. It still had bugs and issues, but by now the power user was much more comfortable letting me do preposterous things to the system. In total, I uncovered almost 800 rules through reverse engineering.

We also managed to speed the algorithm up substantially. The original NATURAL code had a runtime measured in half-hours. The slowest operation of the day happened in the morning, when a new set of orders would drop in. It took hours. With our improved algorithm we got the time down to minutes in the worst case, and less than one second in the best case.

Initially this made our users think something had silently gone wrong, and they would complain to us. When you make an algorithm so much faster that people think it’s not working, you know you’ve made an improvement.

The second thing we drastically improved was correctness. We identified many subtle problems when we subjected the code to my tests, and in fixing those, the code would produce fewer pallets. Fewer pallets means fewer trucks, and fewer trucks means major savings for the company.

In the end people were very happy with it, and did not want to go back to the old way of doing things. For software developers, that is the highest possible praise. We are always prepared to hear users complain about how much better the old system was, and how we failed to take some obscure user or use case into account.

# Personnel Planning System

This is the second of two major systems I wrote for a company producing food. In this case we didn’t have a reference implementation to replace. They did have a big process in place that relied heavily on Excel sheets, however. This sounded like a very easy job until we realized that they had an entire code base written in VBScript inside those files, which in turn relied on an Access database. It turned out that we had a reference implementation after all.

We ended up writing a web app based on React. Once again, I wrote the business logic together with the database developer from the previous project, and we had other people come and go to either help out with the business logic or the GUI.

The users were very free to do what they wanted to with the existing Excel program, and we didn’t want to alienate them, so we tried to retain support for as much of it as possible in one way or another; if it could be formalized and turned into a set of rules, we would keep it and make it accessible for everyone.

We ended up making an interesting database design for this project. The users had a strong need to be able to undo actions, so we ended up with a tree-based database structure, where changes would be layered on top of each other. If you have ever played with Photoshop, you know exactly how this works; if you haven’t, you can imagine changes being drawn on panes of glass. Every change gets its own pane of glass, and to undo a change, you only have to remove its pane.
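
The panes-of-glass idea can be sketched in a few lines. This is an illustrative in-memory model, not the actual database schema: each change set is a new layer, lookups see the topmost value, and undo pops the top pane without touching anything below:

```python
class LayeredStore:
    """Changes stacked like panes of glass: each change set is a new
    layer, lookups see the topmost value for a key, and undo simply
    removes the top pane, leaving the layers below untouched."""

    def __init__(self, base=None):
        self.layers = [dict(base or {})]

    def apply(self, changes):
        self.layers.append(dict(changes))  # a new pane of glass

    def undo(self):
        if len(self.layers) > 1:  # never remove the base layer
            self.layers.pop()

    def get(self, key, default=None):
        for layer in reversed(self.layers):  # topmost pane wins
            if key in layer:
                return layer[key]
        return default
```

Because no layer is ever edited in place, every past state remains reconstructable, which is precisely what makes undo cheap.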

Just as with the other system we developed for this company, the users did not want to go back to the old way of doing things. This is high praise for software developers.

# Article Registry Service

This is a small project that sprang out of the first large project I wrote for a large food producer. The company in question had very big and complex software solutions, and a lot of the business logic was more grown than designed. This led to a lot of confusion, and different teams interpreted the data and its rules differently, which led to subtle problems further down the line.

One area was particularly problematic: the article registry, in other words, the products this company produced. They had a large database with many, many different ways of expressing even simple things, like availability.

To make the problem even worse, the database was very slow. Fetching the thousand articles the system knew about would take around 20 minutes. Most systems needed this information available at all times, and quickly, so they all ended up developing their own caches and updating logic. Needless to say, none of the systems agreed on what the products were.

I resolved this issue by writing a glue service that would interpret the rules. Correctly this time. I wrote a plethora of tests to ensure that I was interpreting everything correctly, and I weeded out at least a few subtle bugs in every system using the articles. I also made the tests part of the service itself, so it could interpret the data and give detailed feedback on any errors. This is the part I'm most proud of, because it allowed my users to finally understand what had gone wrong. The service also stored a cache of all articles that would update automatically when the source material updated, so nobody needed their local caches anymore.
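
The detailed-feedback part can be sketched as a validator that collects every violation instead of stopping at the first one. The rules below are invented examples, not the company's actual business rules:

```python
def validate_article(article):
    """Run every rule against one article dict and collect *all*
    errors rather than failing fast, so users get a complete
    picture of what went wrong with their data."""
    errors = []
    if not article.get("id"):
        errors.append("missing article id")
    if article.get("price", 0) <= 0:
        errors.append("price must be positive")
    availability = article.get("available")
    if availability not in (True, False):
        errors.append(f"ambiguous availability value: {availability!r}")
    return errors
```

Returning a list of human-readable messages, rather than raising on the first bad field, is what lets a service report "here is everything wrong with this article" in one round trip.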

This simplified quite a few code bases, and was generally regarded as a big success.