What Laravel's encrypted casts can protect you from


The other week I asked a very pointy question on the PHP Australia Slack channel:

Real talk, is encrypted casting security theater?

My question was half genuine confusion, and half Cunningham’s Law.

Encrypted casting is a feature in Laravel’s Eloquent ORM which allows us to very transparently encrypt a value in one database column of a model. This encryption happens in the PHP code, using the global app key.

The encryption happens inside the ORM layer, so our app code doesn’t even need to know about it. It felt so transparent, to me, that it may as well not exist.
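
As a rough sketch of what that looks like (the model and column names here are invented for illustration), the cast is declared on the model and everything else stays the same:

use Illuminate\Database\Eloquent\Model;

class ApiCredential extends Model
{
    // The `token` column is encrypted with the app key before it is
    // written to the database, and decrypted when it is read back.
    protected $casts = [
        'token' => 'encrypted',
    ];
}

// Application code just reads and writes plain text:
$credential = new ApiCredential();
$credential->token = 'super-secret-value';
$credential->save();         // ciphertext is what hits the database
echo $credential->token;     // decrypted transparently on access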

Beyond that, and the usual high standard of security that app servers are already held to in order to prevent SQL injection and other vulnerabilities, I wasn’t seeing the benefit of additionally encrypting specific columns. But the good folks of PHP Australia provided me with some great insight into the other threats this feature can mitigate.

Thanks to Andrew, Pat, valorin, James and Samuel for helping me think through this. I appreciate your patient explanations!


It's a 1+n problem, not an n+1 problem


This is probably my most trivial tech pet peeve and I want to get it off my chest.

I have been perplexed by the common use of the term “n+1 query problem”, perhaps popularised by this page in the SQLite docs:

Many Small Queries Are Efficient In SQLite

While the page’s content is interesting, for as long as it has been on the Wayback Machine it has used the phrase “N+1 Queries Are Not A Problem With SQLite”:

https://web.archive.org/web/20240000000000*/https://sqlite.org/np1queryprob.html

The problem is typically caused by code that looks something like this:

// select * from posts...
$posts = Post::all();
foreach ($posts as $post) {
    // select * from users where id = ...
    $user = $post->author;
    // use $user...
}

This code will perform a single database query returning n posts, then for each post it will run a query returning one user, for a total of n additional queries. That’s 1 and then n round-trips to and from the database.

In the typical three-tier, remote database server setup that most webapps use, this is inefficient. (Not so when using SQLite, which is the point of their article above.) There are many ways to fix this kind of problem, depending on the framework or ORM you’re using.
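
In Eloquent, for example, the usual fix is eager loading, which collapses the n follow-up queries into one (a sketch, assuming the same Post/author relationship as above):

// select * from posts...
// select * from users where id in (...)
$posts = Post::with('author')->get();

foreach ($posts as $post) {
    $user = $post->author; // already loaded, no extra query
    // use $user...
}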

Referring to this as an “n+1 problem” annoys me because the one query you do first is fine; it’s the n queries afterwards that are the problem. The phrase “n+1” puts the emphasis on the 1, which isn’t the problem!

I wouldn’t die on this molehill, but I use the phrase “1+n problem” instead and I hope it catches on.


Why there should only be 5 HTTP response codes


I’ve developed a strong opinion about exceptions over the last few years. There are only two pieces of information that all exceptions should include:

1. Who needs to fix this: the client or the provider?
2. When (if ever) should the client retry?

When the client and the provider are not on the same software team, these are the only two answers that determine how an error can be handled.

Now, errors should include helpful details beyond just the answers to the questions above. For example, if the email input in a form is incorrect, the response must tell the user exactly which field is wrong and why it is wrong. But these details are pretty much always down to humans to interpret.

This leads me to the conclusion that there are only 5 necessary HTTP status codes:

*I’ll allow that the semantics of HTTP mean there probably needs to be more than 1 code for redirects. Though I reckon the differences between temporary and permanent redirects etc. could have been expressed just as well with response headers.

I’ve framed this in terms of HTTP response codes, but the same issues apply to exceptions inside your own code. When throwing an exception, ask yourself the questions: who needs to fix this, and when (if ever) should the client retry?
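
A minimal sketch of what that could look like in code (the class and property names here are hypothetical, not a real library API): an exception that carries both answers explicitly, so callers don’t have to guess.

class ServiceException extends \RuntimeException
{
    public function __construct(
        string $message,
        // Who needs to fix this: the caller, or us?
        public readonly bool $callerMustFix,
        // When (if ever) should the caller retry? null means "don't".
        public readonly ?int $retryAfterSeconds = null,
    ) {
        parent::__construct($message);
    }
}

// The message carries the human-readable detail; the two extra fields carry
// everything a machine needs in order to decide what to do next.
throw new ServiceException('Upstream rate limit hit', callerMustFix: false, retryAfterSeconds: 30);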


Lightning talk on JSON:API at PHP Sydney


Tonight I gave a lightning talk at the last PHP Sydney Meetup of the year.

The subject was “An Introduction to JSON:API”, which is the spec we use for our APIs at Pylon:

https://jsonapi.org

The slides are online here:

https://static.crabmusket.net/2024/json_api_php_sydney/

And the recording:

https://youtu.be/cc4N2TyXAKI?si=E5ypdUbZUYXspDV6&t=3895

It was the third talk I’d given at the meetup this year. The talk went well, despite me forgetting to bring the HDMI adapter for my Framework laptop.

A few small things I forgot to say during the talk:

I was both relieved and very slightly disappointed nobody asked me afterwards about JSON Hyper-Schema (I guess they were just respecting my clearly-stated wishes).

On the subject of JSON:API, here are a few articles that have influenced me:


Lessons from the JSON-RPC mailing list


I’ve used JSON-RPC before as a simple request-response protocol for Websocket services. It’s an easy way to avoid bikeshedding.

I was recently looking into tRPC to compare and contrast, and my research led me to the JSON-RPC mailing list. I discovered a lot of interesting ideas in there which you might not have appreciated if you’ve only used JSON-RPC in a superficial way, e.g. over HTTP to a server, or even over a Websocket like I was.
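
For context, a JSON-RPC 2.0 exchange looks like this, with the server echoing the client’s id back in its response (example adapted from the spec):

--> {"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": 1}
<-- {"jsonrpc": "2.0", "result": 19, "id": 1}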

Often, someone would ask why they had to provide an id, since one HTTP request would always result in one HTTP response, so the id could just always be "1". The replies highlight both the centrality of id to the spec, and the real meaning of JSON-RPC being transport-agnostic:

Ajax-type one-off requests make this hard to see, but the id is vital to the client sorting out its many messages. It has little to do with the server, and probably never should.

https://groups.google.com/g/json-rpc/c/vOFAhPs_Caw/m/bMYHo4rdGHoJ

On the client living longer than the transport:

If the transport is handling the role the id currently fills, then yes… but that also means the client using the transport needs to understand that. If you a client makes 30 calls to 4 different servers, having the ids allows a central tabulation of the current call states. What you are suggesting would require that meaning to be injected or stored in the currently running transport instances. This does not work when you want to either freeze the client state, or have requests that outlive the transport instances.

https://groups.google.com/g/json-rpc/c/vOFAhPs_Caw/m/LiqTE-kTH7kJ

On the over-reliance on HTTP and especially GET:

Exactly. HTTP GET has been an issue with json-rpc all the way through, as HTTP GET is often misused… perhaps a thread specific to addressing this angle of it will lead to some clarity for all of us on how to move forward.

https://groups.google.com/g/json-rpc/c/vOFAhPs_Caw/m/LiqTE-kTH7kJ

You keep linking the request object to the transport instance life time and suggesting optimizations based on that view. json-rpc is used in places where that is not true and there is little gained from the “optimization”.

https://groups.google.com/g/json-rpc/c/vOFAhPs_Caw/m/FLCBaleT0UAJ

On notifications and empty responses:

if the function being called has no meaningful return value, adding an id in the call merely forces the server to add a useless NULL response

The NULL (or “void”) response isn’t useless - it allows the client to verify that the server received and processed the request. In some transports like HTTP this may be duplicative of the HTTP response which comes anyway, but in most other transports it is not.

https://groups.google.com/g/json-rpc/c/vOFAhPs_Caw/m/ozmF478rpTkJ

So not only might we be using a transport like Websockets, which doesn’t have request-response semantics, we might actually make requests that outlive the lifetime of a transport.

An easy example: what if your Websocket disconnects due to a temporary network outage? If you’re used to an HTTP world, you might expect that all your in-flight requests must simply be abandoned and retried. But JSON-RPC allows you to keep hold of those ids and match up the responses you receive once you’ve re-established a new Websocket connection.
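
A rough sketch of that idea (the function names and structure are invented, not part of the spec): the client owns a map of pending calls keyed by id, and that map lives independently of any particular connection.

// Pending calls are keyed by id rather than by connection, so replies can
// still be matched after the Websocket has been dropped and re-established.
$pending = [];

function sendCall(string $method, array $params, array &$pending): string
{
    $id = bin2hex(random_bytes(8)); // ids are chosen (and owned) by the client
    $pending[$id] = ['method' => $method, 'params' => $params];
    // ...encode and write the request to whatever transport is currently open
    return $id;
}

function handleReply(array $reply, array &$pending): void
{
    $id = $reply['id'] ?? null;
    if ($id !== null && isset($pending[$id])) {
        // Match the reply to the original call, regardless of which
        // connection it arrived on, then forget it.
        unset($pending[$id]);
    }
}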

Going even further: you could build a sneakernet RPC system that doesn’t even rely on the internet. Or one that distributes requests and responses via gossip protocol. This goes back to JSON-RPC 1.0’s peer-to-peer roots.

(Actually doing that would involve a ton of complexity which is entirely outside the JSON-RPC spec. E.g. deciding how to serialise client state so it knows what to do with responses after a restart. I’m not sure why you would use JSON-RPC for that instead of building a sync engine. But you could!)

Another possibility is having asymmetric transports. For example, a browser-based client could send requests via HTTP, but receive responses via a separate Server-Sent Events channel. The server could make use of EventSource’s Last-Event-ID header to ensure replies aren’t dropped while the client reconnects.

JSON-RPC is a very simple spec. The id is actually the most important part of it. Without id you might as well just send JSON over HTTP any old way.


Somebody please clone Cloudflare Durable Objects


I’ve been looking into how to build a Figma-like realtime multiplayer editor. These two articles go into a lot of detail about the efforts Figma had to expend to achieve it:

https://www.figma.com/blog/rust-in-production-at-figma/

https://www.figma.com/blog/how-figmas-multiplayer-technology-works/

While I was working on a series of prototypes based on the posts above, Cloudflare dropped a new product: Durable Objects, an extension of their Workers platform.

https://blog.cloudflare.com/introducing-workers-durable-objects/

As Paul puts it in this piece about the Durable Objects advantage:

Durable Objects provide a way to reliably connect two clients to the same object. Each object has a (string) name that is used when connecting to it, which typically corresponds to a “room” or “document ID” in the application itself.

In practice, this means that two clients can establish WebSocket connections using the same document ID, and the infrastructure will guarantee that they are connected to the same Durable Object instance.

This is the difference that makes Durable Objects so compelling for multiplayer use cases, but it’s also the reason that the Durable Object abstraction hasn’t been slapped on top of other FaaS platforms – it is a radical departure from core assumptions made by their scheduling and routing systems.

https://digest.browsertech.com/archive/browsertech-digest-cloudflares-durable/

The killer feature of Durable Objects is the global uniqueness of each object, and the ability to address an object by a known string.

For example, say I have a “customer” object in my database and I want to associate a realtime session with each customer, e.g. for the purpose of adding realtime updates to the CRM interface for that customer.

A client can send a request to Cloudflare asking:

connect me via Websocket to the Object identified by "session:customer_<id>"

and when two clients make that same request with the same <id> on opposite sides of the world, both their Websockets will end up connected to the same Durable Object instance somewhere in Cloudflare.

I’ve been wishing another cloud provider would build similar functionality, but as Paul notes, it’s not simple. While edge workers (especially JavaScript ones) are fairly commoditised now, building on Durable Objects is still significant vendor lock-in.

Jamsocket deserves an honorable mention for building open-source tooling to self-host something very similar:

https://jamsocket.com/


Thoughts on git commits, branches, PRs and stacking


At work, we’ve been trying out Graphite for the past year or so. It’s a complement to GitHub that helps us manage stacked PRs.

After trying the “stacking” workflow in the context of our small team, I wrote up some thoughts internally on how we use git. Hopefully, this will help us think more clearly about how to use our version control system to maximum effect. I figured I’d post my reflection publicly, too.

Primitives in version control

Commits

A commit is an atomic change to the codebase. It can touch multiple files, if a single logical change is physically distributed across the filesystem.

Branches

A branch refers to the “tip” of a linked list of commits. Merge commits join two branches into one, so that commits form a directed acyclic graph.

Pull requests

A pull request is a social construct built atop branches. It is a “request” from one developer to merge one branch into another. (And yes, I prefer GitLab’s “merge request” terminology, but unfortunately it’s the less used term.)

Stacks

A stack is a collection of related PRs, often with sequential dependencies between them. E.g. the PR that implements a frontend feature depends on the PR that implements the backend for the feature. These two PRs could be managed using a stacking tool like Graphite.

The stated purposes of stacking workflows are:

Outcomes

Thinking this through reminded me of the importance of commits themselves. I’d like to try harder to create exemplary commits for my team, which should flow on into better PRs, more helpful git blame, and hopefully a reduced need for stacking tools.


Relational


There seems to be a common misconception about the meaning of the term “relational” (in the context of “relational databases”).

There are a few commonly-used phrases:

One of these things is not like the others.

A classic demonstration of the misunderstanding:

Collectively, multiple tables of data are called relational data because it is the relations, not just the individual datasets, that are important. Relations are always defined between a pair of tables.

https://r4ds.had.co.nz/relational-data.html

The Drizzle ORM also gets this annoyingly wrong, although at least they didn’t bake the word relation into the library’s API surface like Eloquent.

A relation, properly defined, is closely analogous to a table: in the relational model, a relation is a set of tuples sharing the same attributes, which is exactly what a table of rows and columns approximates. Relational data is just data stored in tables.

If somebody asks you whether your data “is relational”, they may be suffering from this misunderstanding.


Lessons in object-oriented design


I will be re-reading the following quotes from Sandi Metz’s Practical Object-Oriented Design every morning before I start coding:

Object-oriented design is about managing dependencies. It is a set of coding techniques that arrange dependencies such that objects can tolerate change. In the absence of design, unmanaged dependencies wreak havoc because objects know too much about one another. Changing one object forces change upon its collaborators, which in turn forces change upon its collaborators, ad infinitum. A seemingly insignificant enhancement can cause damage that radiates outward in overlapping concentric circles, ultimately leaving no code untouched.

This makes sense of why dependency injection containers exist, why inversion of control is treated as a principle, and how to split classes up into the right responsibilities.
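
A tiny PHP sketch of the idea (class names invented for illustration): the object asks for its collaborator instead of constructing it, so the dependency points at an abstraction rather than a concrete class.

interface PaymentGateway
{
    public function charge(int $cents): void;
}

class Invoice
{
    // The gateway is injected, not instantiated here, so Invoice knows as
    // little as possible about its collaborator and tolerates change to it.
    public function __construct(private PaymentGateway $gateway) {}

    public function pay(int $cents): void
    {
        $this->gateway->charge($cents);
    }
}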

Asserting that code should be easy to change is akin to stating that children should be polite; the statement is impossible to disagree with, yet it in no way helps a parent raise an agreeable child. The idea of easy is too broad; you need concrete definitions of easiness and specific criteria by which to judge code. If you define easy to change as

  • changes have no unexpected side effects,
  • small changes in requirements require correspondingly small changes in code,
  • existing code is easy to reuse, and
  • the easiest way to make a change is to add code that in itself is easy to change,

then the code you write should have the following qualities. Code should be

  • Transparent The consequences of change should be obvious in the code that is changing and in distant code that relies upon it.
  • Reasonable The cost of any change should be proportional to the benefits the change achieves.
  • Usable Existing code should be usable in new and unexpected contexts.
  • Exemplary The code itself should encourage those who change it to perpetuate these qualities.

Code that is Transparent, Reasonable, Usable, and Exemplary (TRUE) not only meets today’s needs but can also be changed to meet the needs of the future. The first step in creating code that is TRUE is to ensure that each class has a single, well-defined responsibility.

Optimising for unknown future requirements seems like a great way to go about writing code that’s usable today as well.