It was June 28 of this year that I unboxed my new Framework laptop.
After 6 months of using it as my personal (non-work) machine, here are a few reflections.
The hardware is very nice.
I like the keyboard and screen, everything feels sturdy and usable.
I haven’t swapped out my expansion cards yet; 2x USB C, 1x USB A, and 1x 250GB expansion card are all I really need.
I did once do a talk at a meetup and forgot to bring my HDMI expansion card, resulting in some amusement from the crowd.
I don’t regret choosing Fedora instead of Ubuntu as the main OS.
While I’d always used Ubuntu in the past (either dual-booting Windows, via WSL, or the occasional bare install), Fedora is familiar enough and hasn’t prevented me using anything I need to for software development, browsing, and light document work.
Everything I’ve plugged in, from external monitors to keyboards etc., has worked fine. I don’t think this should be taken for granted!
I haven’t relied on its battery life, and it doesn’t seem fantastic.
Though, compared to my aging XPS15 which gets 1 hour of battery usage tops, it’s obviously an improvement.
I’ve taken it on some long train trips and used it for maybe 4-5 hours without exhausting the battery… I couldn’t spend 6 or 8 hours working on a laptop in that kind of environment anyway, even if the battery were going to last that long.
About six weeks ago, I switched from GNOME as my desktop environment to KDE Plasma.
The main motivation was that Plasma has a native option for display scaling that isn’t 100% or 200% (I have it set to 125%).
At the Framework 13’s screen resolution I find this quite important, as 100% is uncomfortably small.
I didn’t have the problems with GNOME that seem to cause a lot of animosity towards it (here’s an example) - I’m sympathetic to those complaints, they just never caused me any practical issues with my usage.
I found it a pleasantly-minimalistic environment with good defaults, and if it weren’t for the 125% scaling issue I’d probably not have bothered shopping around.
Something I didn’t appreciate until I installed Plasma was the way GNOME just handled my keyring for me automatically.
After putting up with Slack prompting me to create a wallet every time I opened it, then selecting the non-deprecated-sounding wallet creation option and hitting a wall because I didn’t have a GPG key, I finally decided to bite the bullet and work it out.
It took me far too much searching and reading old blog posts, forum threads, and help articles before discovering this guide to setting up kwallet with git ssh keys, which even then didn’t work unmodified as my main key is id_rsa not id_rsa_something.
Plasma by default seems to have a lot of cruft I don’t care about and had to trawl through settings to disable.
The most egregious example was the OS X(?)-style bouncy app icons attached to my cursor, which are pretty ugly.
Plasma doesn’t mount my external storage expansion card by default, which GNOME did, so sometimes when I boot up, downloads end up in a broken state until I manually open the file browser, mount the drive, and restart them.
I assume I will be able to fix this after some more terminal and/or settings page spelunking.
So basically, I like Plasma well enough to keep using it, but it was a harsh reminder of the kinds of niggling Linux-y config issues that I had forgotten about.
It’s a more “traditional” looking desktop which I don’t prefer to GNOME’s minimalism, but it has comprehensive settings and good performance.
My question was half genuine confusion, and half Cunningham’s Law.
Encrypted casting is a feature in Laravel’s Eloquent ORM which transparently encrypts the value of a single database column on a model.
The encryption happens in the PHP code, inside the ORM layer, using the global app key, so our app code doesn’t even need to know about it.
It felt so transparent, to me, that it may as well not exist.
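For reference, here’s roughly the whole feature from the app’s point of view - a minimal sketch, where the model and attribute names are just made-up examples:

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Patient extends Model
{
    // Eloquent encrypts this column with the app key on write,
    // and decrypts it on read; the rest of the code sees plain values.
    protected $casts = [
        'medicare_number' => 'encrypted',
    ];
}

That’s the entire integration - reads and writes through the model behave exactly as before, which is what made it feel invisible to me.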
Our entire database is already encrypted at rest, which protects against somebody walking out of the AWS datacenter with a hard drive.
Our app servers’ connections to the database are encrypted in-flight, so observers of the network can’t snoop on that traffic.
Beyond that, and of course the usual high standard of security that app servers are held to in order to prevent SQL injection or other vulnerabilities, I wasn’t seeing the benefit of additionally encrypting specific columns.
But the good folks of PHP Australia provided me with some great insight into what other threats this feature can mitigate.
The biggest one I overlooked was SQL injection.
If an app had an SQL injection vulnerability, an attacker might be able to perform arbitrary queries against the database, but not arbitrary code execution.
Therefore they’d be able to read unencrypted data, but not decrypt the encrypted columns, since the app key never touches the database.
Another good point was systems which aren’t our app which could access the same database.
For example, our business intelligence dashboard or other analytics systems.
Their trust model is different to that of our application; encryption which is transparent to our app, and therefore seems “too easy”, is very opaque to these other systems!
Finally, our own staff could, even inadvertently, be a risk to the data.
Adding this extra layer of encryption of the most sensitive data protects against accidental disclosure, since access to the APP_KEY is extremely tightly controlled.
This might not protect us against bugs in the code, but could catch ops errors or other mistakes.
Thanks to Andrew, Pat, valorin, James and Samuel for helping me think through this.
I appreciate your patient explanations!
While the page’s content is interesting, for as long as it has been on the Wayback Machine it has used the phrase “N+1 Queries Are Not A Problem With SQLite”:
The problem is typically caused by code that looks something like this:
// select * from posts...
$posts = Post::all();

foreach ($posts as $post) {
    // select * from users where id = ...
    $user = $post->author;

    // use $user...
}
This code will perform a single database query returning n posts, then for each post it will run a query returning one user, for a total of n additional queries.
That’s 1 and then n round-trips to and from the database.
In the typical three-tier, remote database server setup that most webapps use, this is inefficient.
(Not so when using SQLite, which is the point of their article above.)
There are many ways to fix this kind of problem, depending on the framework or ORM you’re using.
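In Laravel, for instance, the usual fix is eager loading, which collapses the n per-post lookups into a single extra query:

// select * from posts...
// select * from users where id in (...)
$posts = Post::with('author')->get();

foreach ($posts as $post) {
    $user = $post->author; // already in memory, no extra query

    // use $user...
}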
Referring to this as an “n+1 problem” annoys me because the problem is that you do one query first, which is fine, then n queries afterwards, which are the problem.
This phrasing emphasizes the 1, which isn’t the problem!
I wouldn’t die on this molehill, but I use the phrase “1+n problem” instead and I hope it catches on.
Ursula Franklin was a scientist, author, activist and Quaker who wrote, among many other things, on technology.
De-escalate speed, and also de-escalate vocabulary.
Much of our vocabulary comes, again, out of production and advertising.
I think we might make a pact with each other never to use the word “awesome”.
I think the world was fine without the word awesome.
De-escalate vocabulary, de-escalate hype.
I consider reflecting on the ways my vocabulary and attitudes are shaped by a consumerist and technological society to be part of my spiritual practice.
Hype and marketing lead us away from the truth in order to influence others.
I want to remember, when I am trying to be persuasive, to influence using truth, not exaggeration.
It’s good enough if things work.
Everything doesn’t have to be “splendid”, “cutting edge”, “world class”.
Those are responses to production.
See where those things come from.
It’s very poor practice to hurry on your plants in the winter; when we have fluorescent lights all our plants get spindly and look miserable.
There is nothing to speed per se.
Like efficiency, speed has a direction.
It is not its own virtue; it depends on towards what we are speeding.
And while there are many injustices we wish to speed away from, speed allows or even forces us to miss details, to omit nuance, sometimes to fail to attend adequately to reality.
I’ve developed a strong opinion about exceptions over the last few years.
There are only two pieces of information that all exceptions should include:
Whose fault is the error? Was it you, the client? Or us, the provider?
How persistent is the error? Should you retry, and if so, should you retry in 10 seconds or 24 hours?
When the client and the provider are not on the same software team, these two answers are all that determine how an error can be handled.
A client error can only be fixed by the client.
For example, a field that must be an email address can only be fixed by the person editing the input.
Provider errors, like null pointer exception bugs, can only be fixed by the developer of the service.
It’s useless for the client to retry, as they will probably always hit the same code path and there’s nothing they can do about it.
In this case, they need to “retry after the next deployment”.
Some failures, like deadlocks or other infrastructural or network errors, are transient and the client may choose to retry in a matter of seconds (for interactive uses), or minutes for a queued, batched or other background process.
Webhook retries are a classic example; event delivery is often automatically retried over a period of hours and days.
Now, errors should include helpful details beyond just the answers to the questions above.
For example, if the email input in a form is incorrect, the response must tell the user exactly which field is wrong and why it is wrong.
But these details are pretty much always down to humans to interpret.
This leads me to the conclusion that there are only 5 necessary HTTP status codes:
200: no error
300: no error, but client must perform a redirect*
400: client did something wrong; fix your submission before retrying
500: developer did something wrong; find a workaround or try again next week
503: temporary failure, should also provide a Retry-After response header
*I’ll allow that the semantics of HTTP mean there probably needs to be more than 1 code for redirects.
Though I reckon the differences between temporary and permanent redirects etc. could have been expressed just as well with response headers.
I’ve framed this in terms of HTTP response codes, but the same issues apply to exceptions inside your own code.
When throwing an exception, ask yourself the questions: who needs to fix this, and when (if ever) should the client retry?
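To make that concrete, here’s a rough sketch of an exception base class that answers exactly those two questions - purely illustrative, the class and method names are invented rather than taken from any framework:

// Sketch only: the names here are made up for illustration.
abstract class AppException extends \Exception
{
    // Whose fault is the error: the client, or us (the provider)?
    abstract public function isClientFault(): bool;

    // How persistent is the error: seconds until a retry is worthwhile,
    // or null if retrying won't help until we ship a fix.
    abstract public function retryAfterSeconds(): ?int;

    // Which maps neatly onto the handful of status codes above.
    public function httpStatus(): int
    {
        if ($this->isClientFault()) {
            return 400;
        }

        return $this->retryAfterSeconds() === null ? 500 : 503;
    }
}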
It was the third talk I’d given at the meetup this year.
The talk went well, despite me forgetting to bring the HDMI expansion card for my Framework laptop.
A few small things I forgot to say during the talk:
It can be tempting to try to fit all API endpoints into the JSON:API mold, but sometimes they just don’t fit well.
Like aggregates, charting, RPC calls like “merge these 3 entities”, basically anything outside CRUD.
We have come up with various workarounds, but sometimes we just bail and create a cheeky endpoint with a verb in the URL.
Some things we need to watch out for when preparing endpoints using Laravel, e.g. preventing 1+n query problems, or avoiding loading relationships entirely when they are excluded by sparse fieldsets.
I did want to include some Laravel code examples, but since I wrote the slides in about 30min the morning of, I just didn’t have time.
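For what it’s worth, here’s a minimal sketch of the kind of example I had in mind (the resource and relationship names are placeholders): with Laravel’s API resources, whenLoaded() keeps a relationship out of the response - and avoids lazily querying it - unless the controller decided to eager-load it, e.g. because a sparse fieldset asked for it.

namespace App\Http\Resources;

use Illuminate\Http\Resources\Json\JsonResource;

class PostResource extends JsonResource
{
    public function toArray($request): array
    {
        return [
            'id'     => $this->id,
            'title'  => $this->title,
            // Included only when the relationship was eager-loaded upstream,
            // so an excluded relationship never triggers a lazy 1+n query here.
            'author' => new UserResource($this->whenLoaded('author')),
        ];
    }
}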
I also wanted to touch on how we’re working with Vue Query on the frontend and the way JSON:API’s entity-graph structure helps with that.
Someone asked afterwards about type and documentation generation, which I had deliberately excluded from the talk to keep it focused.
There seemed to be some interest in a follow-up talk on those aspects of our setup.
I was both relieved and very slightly disappointed nobody asked me afterwards about JSON Hyper-Schema (I guess they were just respecting my clearly-stated wishes).
On the subject of JSON:API, here are a few articles that have influenced me:
I’ve used JSON-RPC before as a simple request-response protocol for Websocket services.
It is a simple way to avoid bikeshedding.
I was recently looking into tRPC to compare and contrast, and my research led me to the JSON-RPC mailing list.
I discovered a lot of interesting ideas in there which you might not have appreciated if you’ve only used JSON-RPC in a superficial way, e.g. over HTTP to a server, or even over a Websocket like I was.
Often, someone would ask why they had to provide an id, since one HTTP request would always result in one HTTP response, so the id could just always be "1".
The replies highlight both the centrality of id to the spec, and the real meaning of JSON-RPC being transport-agnostic:
Ajax-type one-off requests make this hard to see, but the id is vital to
the client sorting out its many messages. It has little to do with the
server, and probably never should.
If the transport is handling the role the id currently fills, then
yes… but that also means the client using the transport needs to
understand that. If a client makes 30 calls to 4 different
servers, having the ids allows a central tabulation of the current
call states. What you are suggesting would require that meaning to be
injected or stored in the currently running transport instances. This
does not work when you want to either freeze the client state, or have
requests that outlive the transport instances.
Exactly. HTTP GET has been an issue with json-rpc all the way through,
as HTTP GET is often misused… perhaps a thread specific to
addressing this angle of it will lead to some clarity for all of us on
how to move forward.
You keep linking the
request object to the transport instance life time and suggesting
optimizations based on that view. json-rpc is used in places where
that is not true and there is little gained from the “optimization”.
if the function being called has no meaningful return value, adding an id in the call merely forces the server to add a useless NULL response
The NULL (or “void”) response isn’t useless - it allows the client to
verify that the server received and processed the request. In some
transports like HTTP this may be duplicative of the HTTP response
which comes anyway, but in most other transports it is not.
So not only might we be using a transport like Websockets, which doesn’t have request-response semantics, we might actually make requests that outlive the lifetime of a transport.
An easy example: what if your Websocket disconnects due to a temporary network outage?
If you’re used to an HTTP world, you might expect that all your in-flight requests must just be abandoned and retried.
But JSON-RPC allows you to keep hold of those ids and match up responses you receive once you’ve re-established a new Websocket connection.
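As a minimal sketch (illustrative only, not a real library), the client-side bookkeeping is just a map of pending callbacks keyed by id - and nothing about it cares which connection a response eventually arrives on:

// Sketch only - not a real JSON-RPC client library.
class JsonRpcClient
{
    private int $nextId = 1;

    /** @var array<int, callable> Pending callbacks, keyed by request id. */
    private array $pending = [];

    // Build a request and remember what to do with its eventual response.
    public function buildRequest(string $method, array $params, callable $onResponse): string
    {
        $id = $this->nextId++;
        $this->pending[$id] = $onResponse;

        return json_encode([
            'jsonrpc' => '2.0',
            'id'      => $id,
            'method'  => $method,
            'params'  => $params,
        ]);
    }

    // Feed in every response, regardless of which connection delivered it.
    public function handleResponse(string $json): void
    {
        $response = json_decode($json, true);
        $id = $response['id'] ?? null;

        if ($id !== null && isset($this->pending[$id])) {
            ($this->pending[$id])($response);
            unset($this->pending[$id]);
        }
    }
}

Send the built request over whatever transport happens to be alive; if the socket drops and you reconnect, handleResponse() still knows what to do with any replies that turn up later.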
Going even further: you could build a sneakernet RPC system that doesn’t even rely on the internet.
Or one that distributes requests and responses via gossip protocol.
This goes back to JSON-RPC 1.0’s peer-to-peer roots.
(Actually doing that would involve a ton of complexity which is entirely outside the JSON-RPC spec.
E.g. deciding how to serialise client state so it knows what to do with responses after a restart.
I’m not sure why you would use JSON-RPC for that instead of building a sync engine. But you could!)
Another possibility is having asymmetric transports.
For example, a browser-based client could send requests via HTTP, but receive responses via a separate Server Sent Events channel.
The server could make use of EventSource’s Last-Event-ID header to ensure replies aren’t dropped while the client reconnects.
JSON-RPC is a very simple spec.
The id is actually the most important part of it.
Without id you might as well just send JSON over HTTP any old way.
I’ve been looking into how to build a Figma-like realtime multiplayer editor.
These two articles go into a lot of detail about the efforts Figma had to expend to achieve it:
While I was working on a series of prototypes based on the posts above, Cloudflare dropped a new product: Durable Objects, an extension of their Workers platform.
As Paul puts it in this piece about the Durable Objects advantage:
Durable Objects provide a way to reliably connect two clients to the same object. Each object has a (string) name that is used when connecting to it, which typically corresponds to a “room” or “document ID” in the application itself.
In practice, this means that two clients can establish WebSocket connections using the same document ID, and the infrastructure will guarantee that they are connected to the same Durable Object instance.
This is the difference that makes Durable Objects so compelling for multiplayer use cases, but it’s also the reason that the Durable Object abstraction hasn’t been slapped on top of other FaaS platforms – it is a radical departure from core assumptions made by their scheduling and routing systems.
The killer feature of Durable Objects is the global uniqueness of each object, and the ability to address an object from a known string.
For example, say I have a “customer” object in my database and I want to associate a realtime session with each customer, e.g. for the purpose of adding realtime updates to the CRM interface for that customer.
A client can send a request to Cloudflare asking:
connect me via Websocket to the Object identified by "session:customer_<id>"
and when two clients make that same request with the same <id> on opposite sides of the world, both their Websockets will end up connected to the same Durable Object instance somewhere in Cloudflare.
I’ve been wishing another cloud provider would build similar functionality, but as Paul notes, it’s not simple.
While edge workers (especially JavaScript ones) are fairly commoditised now, building on Durable Objects is still significant vendor lock-in.
Jamsocket deserves an honorable mention for building open-source tooling to self-host something very similar:
The epigraph to Jane Jacobs’ The Death and Life of Great American Cities reads:
“We are all very near despair.
The sheathing that floats us over its waves is compounded of hope, faith in the unexplainable worth and sure issue of effort, and the deep, sub-conscious content which comes from the exercise of our powers.”
– Oliver Wendell Holmes, Jr
I’m struck by how the swing from despair to contentment in the quote mirrors the journey of her book’s title, from death to life.
The quote resonates so much with me because of this fragment: “the unexplainable worth and sure issue of effort”.
I have a real sense that we humans are made to be creative, to strive, to exert our effort.
We aren’t made for drudgery, for rote work or degrading toil.
Our effort sustains us in a world that constantly challenges and threatens us.
It is how we care for each other.
This innate creativity can be twisted - into hustle culture, into prosperity gospels, into class warfare that convinces the poor that they must overwork themselves to enrich the owners.
But I, for my part, still feel most content when I have done hard work for a good cause.
I made this quote the epigraph for my own website because I want it to be a statement of intent, a reflection of what I value and why I write.
For the first time since 1868, Quakers are not meeting for worship on Devonshire Street in Sydney.
The increasing financial pressure of owning heritage-listed property in the City of Sydney, including regulatory changes and steeply increasing insurance costs, made it untenable for us to keep the property.
Many of us are sad to leave behind the history that happened in that building, to no longer continue meeting between the walls that housed so much community.
Many of us are looking forward to shedding the burden the property was putting on the meeting, financially and in the energy and effort that was needed to manage it.
We are now meeting in a light, pleasant room in Pilgrim House on Pitt Street.
We have downsized.
A church can be anywhere two or more are gathered together.
At work, we’ve been trying out Graphite for the past year or so.
It’s a complement to our GitHub account which helps us manage stacked PRs.
After trying the “stacking” workflow in the context of our small team, I wrote up some thoughts internally on how we use git.
Hopefully, this will help us think more clearly about how to use our version control system to maximum effect.
I figured I’d post my reflection publicly, too.
Primitives in version control
Commits are mandatory; we can’t use git without commits.
Branches let us have multiple “latest” commits in the repo. We usually use this to separate “work in progress” from “work that is integrated”. Another use of branches is to distinguish “latest changes from the dev team” from “latest deployment”. Using more than one branch is not mandatory!
Pull requests are a social technology built on top of branches. They are a request for social interaction before a branch is merged with the trunk.
Stacking is another social technology which makes it convenient to maintain multiple related pull requests. (Maybe it’s a stretch to call stacks a “primitive”!)
Commits
A commit is an atomic change to the codebase. It can touch multiple files, if a single logical change is physically distributed across the filesystem.
For the benefit of future investigation using git blame, make sure a commit is coherent. It should have a single purpose. One change should be completely implemented in one commit, and each commit should implement only one change.
For the benefit of future debugging using git bisect, make sure a commit doesn’t break the app. If the app is in a broken state after a commit, then we will not be able to run the app to determine if the commit introduced a bug.
Conventional Commits introduces the idea that you can use the commit message for more things in the future: e.g. automating a changelog. This adds more meaning to the commit message itself, which is why it must follow a specific format.
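For anyone unfamiliar, a Conventional Commits message looks roughly like this (the type, scope, and description are invented examples):

feat(invoices): add GST line items to the PDF export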
Branches
A branch refers to the “tip” of a linked list of commits. Merge commits join two branches back into one, so the full history forms a directed acyclic graph.
Branching away from the trunk allows us to sequester away “work in progress” so that developers can avoid stepping on each others’ toes. We prefer to do this rather than to do all work in the main branch, though some teams have the opposite preference.
Branches can be a point of integration with other tools; for example, we use Linear which can automatically link issues to branches with particular naming conventions.
“Work in progress” branches often end up looking like a raw history of ad-hoc edits, rather than a curated sequence of atomic and coherent commits. This is because the branch is “hidden away” and the consequences of messy commits in a branch are zero… until it is merged.
Pull requests
A pull request is a social construct built atop branches.
It is a “request” from one developer to merge one branch into another.
(And yes, I prefer GitLab’s “merge request” terminology, but unfortunately it’s the less used term.)
It is assumed that the requester and the merger will be different people, though solo developers may use a PR-based workflow too…
Because PRs provide another point of integration for things like:
Code review, these days including code review performed by tools ranging from linters to security scanners to LLMs.
Automated testing, especially when the full test suite is too heavy to run on every single commit pushed to the VCS
Preview deployments (though we are not doing this)
PRs provide an opportunity to add more human-focused details to a branch, e.g. screenshots of what the changes look like, discussion threads, approval processes, including e.g. for certifications like SOC2.
Because of these social benefits, a PR might only contain a single commit. But our PRs usually come from a branch with many commits over multiple days of work. As noted above, our branches are often full of “work in progress” commits which aren’t cohesive.
However, a PR can be a great opportunity to review your own work and rebase the commits to become more coherent once the work is “complete” enough to be merged. Rebasing a series of 10 wip commits into 4 coherent changes is a gift to the future of the codebase.
Stacks
Stacks are a collection of related PRs, often with sequential dependency between them.
E.g. the PR that implements a frontend feature depends on the PR that implements the backend for the feature. These two PRs could be managed using a stacking tool like Graphite.
The stated purposes of stacking workflows are:
Preventing developers from being “blocked on review”, allowing them to keep working on top of a pull request while it’s still not merged. This is a technical solution to the social problem of slow/asynchronous PR reviews.
Make it easier to work on related branches while keeping them all up-to-date with each other. This is a technical improvement to git - the same approach could be done manually with git commands, it would just be more annoying.
Outcomes
We have noticed a tendency for stacks to grow and accumulate more and more unmerged related PRs. This may be because we often work on fairly large features which need to be broken up into many parts. Our PR reviews being slow, because the team is small and busy, also makes it attractive to “move on” to another PR on the stack.
We don’t often rebase or clean up our branches when creating a PR. This can exacerbate problems further downstream, e.g. making it harder to create cleanly-separated stacks.
We’ve adopted the “conventional commits” format in our commit messages, but we aren’t yet doing anything with the commit messages. This has led to some looseness in the way we use our commit tags, as we can select any commit type and it doesn’t affect anything.
Thinking this through reminded me of the importance of commits themselves. I’d like to try harder to create exemplary commits for my team, which should flow on into better PRs, more helpful git blame, and hopefully a reduced need for stacking tools.
Wright designed the home above the waterfall: Kaufmann had expected it to be below the falls to afford a view of the cascades.
It has been said that Kaufmann was initially very upset with this change.
Contrast this with Christopher Alexander’s advice in A Pattern Language, pattern 104, “Site Repair”:
Buildings must always be built on those parts of the land which are in the worst condition, not the best.
This idea is indeed very simple. But it is the exact opposite of what usually happens; and it takes enormous will power to follow it through.
Building the house directly on the waterfall makes for a stunning piece of architecture, and the house’s fame was surely enhanced because of it.
Maybe the Kaufmanns came to enjoy the unexpected placement of the house.
But it’s a reason I sometimes don’t describe Christopher Alexander as an architect.
While I enjoy his buildings, what is really special about them is their relationship with their surroundings and, in a campus setting, with each other.
It was his planning, his integration not just of physical spaces but of people and systems, that made his work so valuable.
Consider the site and its buildings as a single living eco-system.
Leave those areas that are the most precious, beautiful, comfortable, and healthy as they are, and build new structures in those parts of the site which are least pleasant now.
I wonder… in what other parts of life do we rush to consume beauty, rather than appreciate and nurture it?
Monderman’s philosophy, popularly called “shared space,” as coined by the English urban designer Ben Hamilton-Baillie, has been implemented in cities around the world. It seems to be working. Instead of causing chaos and collisions, the “red-light-removal schemes” almost always result in improved sociability and traffic flow, and fewer accidents in some cases. A study of center-line removal in Wiltshire, U.K., for instance, found that people drove more safely without the markings and the number of accidents decreased by 35 percent.
One of the things I like about Bluesky is its use of an open protocol, which reduces the chances that it will follow the fate that befell Twitter. I’ve set my domain for Bluesky, so hopefully even if bad things happen to Bluesky itself, the AT protocol will live on.
During the count of votes for and against the bill, Te Pāti Māori MP Hana-Rawhiti Maipi-Clarke stood up from her seat starting a haka directed at Seymour, which her colleagues and MPs from the Greens and Labour joined. When people in the public gallery above joined in loudly, Speaker Gerry Brownlee suspended Parliament for an hour until the gallery was cleared.
Double loading a sidewalk is when you put amenities or features on both sides of the pedestrian walkway, such as outdoor seating, street trees, kiosks, and dining sheds. This makes the walkway feel like a kind of “safe zone” drawing people in large numbers to gather and enjoy the stretches where everyone feels safe. This leads to a much more enjoyable, safer, and more relaxed experience than walking right alongside traffic. It turns the sidewalk from an afterthought of the street’s design into the main attraction.
Deus Ex (Ion Storm Austin, released June 23, 2000) is often considered one of the greatest games of all time. Its story is centered around real world conspiracy theories in the year 2052, which informs an aesthetic characterized as near future cyberpunk. The setting combines and contrasts recognizable real-world architecture and set dressing with futuristic Sci-Fi technology.
To put it very simply, the little things matter. The sandwiches that get sent to hospitals matter. Ritalin supply chains matter, lumbar support in chairs matter, and yes, stupid React widgets matter. They go out into society, and every time someone says “Ah, I just want to get paid”, we get another terrible intersection that haunts the community for five generations. I’m going to stay angry about bad software engineering.
The researchers found that honoki, a kind of magnolia tree native in Japan and traditionally used for sword sheaths, is most suited for spacecraft, after a 10-month experiment aboard the International Space Station. LignoSat is made of honoki, using a traditional Japanese crafts technique without screws or glue. Once deployed, LignoSat will stay in orbit for six months, with the electronic components onboard measuring how wood endures the extreme environment of space, where temperatures fluctuate from -100 to 100 degrees Celsius every 45 minutes as it orbits from darkness to sunlight.
Getting there had required relentless organizing, and fundraising, as well as reassuring skeptical rideshare drivers, who doubted that a worker-owned co-op could challenge the Uber-Lyft duopoly – which controls 98% of the U.S. rideshare market. It is also a software-engineering feat: Theirs is the first driver-owned rideshare app in the US to offer both on-demand and pre-scheduled rides. There were also legislative hurdles, such as the law that required “transportation network companies” to pay an annual permit fee of $111,250 to the Colorado Public Utilities Commission.
Zulip is a team chat app, broadly similar to Slack or Microsoft Teams.
When the Future of Coding community started considering Slack alternatives earlier this year, I was reminded that Zulip exists.
They didn’t end up moving to it, but I got interested.
I am in several Slacks, for my day job and a couple of community groups.
The difference when I open Zulip is noticeable.
Zulip has a snappier and denser UI.
But it’s the way messages are grouped into topics that makes it really easy to ignore a bunch of messages that I don’t need to read.
That makes it much faster to get a broad sense of “what’s happening” and then focus on chats I actually care about.
On a personal level, I wish groups with ideological commitments would consider supporting Zulip, a small independent company funded by its customers whose owners are intimately involved in building the product.
Rather than Slack (sold to Salesforce) or Discord (raised hundreds of millions of dollars).
I really hope to see more communities and companies considering Zulip as a viable alternative to the corporate players.
I found out about Paradicms when searching for advice about working with RDF data in JavaScript.
(The name appears to be a combination of “paradigms” and “CMS”.)
The pitch is that they’re building a CMS for digital humanities projects like archives and museums in a minimal computing way.
Data is stored in systems that are friendly to non-coders, like spreadsheets.
The final result is a static site, with no server to host or babysit.
The same data can be built into different site layouts depending on the project’s needs, and Paradicms provides several templates.
RDF comes in between the data sources (e.g. a spreadsheet) and the static site generation.
Spreadsheets in a conventional format are pulled into a linked data graph, which is then queried to create the data on each page.
Critically, this happens at build time, not at page load time.
I like:
Using RDF as an “integration” or “enrichment” layer between different data sources. It feels like all data models eventually become RDF anyway, when they are asked to accommodate enough use cases.
The focus on minimal computing, avoiding complexity for non-technical users.
We will truly never escape spreadsheets, and this approach leans into this truth and meets users where they are, rather than attempting to reinvent data management.
I worry about:
The UX of editing linked data in spreadsheets is… not good. I’m not yet convinced “real people” can work with this. A significant issue is having to pick IDs from dropdowns.
Is static site generation from RDF “simple enough” or is this where all the complexity moves to? The project provides some templated site types based on their digital collections data model, but what if I wanted to build something different?
It uses Python instead of JavaScript, the universal scripting language of the web. (Yes, this is a silly nitpick and slightly tongue-in-cheek.)
I wish RDF were more popular as a concept, with apps providing schemas/grammars for their data export.
I’d like to try out the Paradicms approach on some small projects for local groups I’m part of that aren’t technical.
I hope this could be a way to responsibly deliver a small data-backed site or app to a group that aren’t coders.
This is me upholding the web developer meme of rebuilding my personal site, and writing about rebuilding my personal site!
My main motivation was to add a micro blog with tags, so I’d be incentivised to write short posts instead of long ones.
But along the way I did achieve some secondary goals too:
Try out Eleventy, which I had heard good things about. I had previously used Jekyll and Hugo.
Simplify to a brutalist, high-performance, low-maintenance design.
I spent way too long thinking about URLs.
Previously, it was just:
/ → homepage
/blog → list of posts
/blog/:post-slug → a single post
I wanted to separate my “long reads” which are less time-oriented, from my new microblog which is more timely.
So I ended up with:
/ → homepage
/:article-slug → just straight into an article
/:year → full content of microblog posts in that year
/:year/:post-slug → a single post from that year
I like this because:
Knocking the slug off a blog post still results in a valid page
Each year page has a bounded and reasonable size, and you can read all posts without clicking in
Long reads don’t have a year in the URL, which differentiates them from the off-the-cuff posts
I also added tag pages at /topic/:tag-slug so I can share the whole story of a single topic on one page.
Hosting:
I stayed with Cloudflare Pages because of their green hosting and very snappy deployment pipeline.
Migrating content to 11ty:
Getting my existing blog content into Eleventy was pretty straightforward.
There were a few small hiccups, like configuring a different markdown processor to linkify my headings correctly.
Working with Eleventy’s content system was different to the very post-centric workflow I was used to.
It seems good though!
I’m interested to see if I can bend it to more shapes, like adding book reviews and a sketchbook, like Tom MacWright’s site.
Using my domain as my Bluesky handle (not super relevant but cool)
Add topics as Atom <category> tags
Thoughts so far:
Learning Nunjucks to create HTML templates was… let me just say, it’s been a long time since I used a templating engine that wasn’t Blade, and I much prefer it when they just let you embed the host language instead of coming up with their own expression language.
“Filters” for everything feels very 2005.
I think I’ve abused the content system frightfully to create a collection of years to generate my blog post year pages from.
That may need to be revised…
or, I could just write more blog posts and not worry about it!
After trying to write long-form content for a while back in 2019-20 and realising that a) I now don’t have the time for it, and b) it’s hard to stay motivated for the length of an entire piece, I’ve decided to revamp my website and focus on building up a short-form blog.
As I get increasingly involved in local housing groups, my church, and hopefully doing more speaking in the local PHP and JS communities, I want to start writing up my thoughts and learning publicly.
I’m really excited to go Linux full-time with a just-arrived Framework 13" running Fedora.
The experience has been fantastic so far.
DIY setup, software installation, customisation, wifi, external screens, Flathub apps, everything Just Works.
This is definitely my year of Linux On The Desktop.
I did have one assembly hiccup which felt scary.
An extra magnet had snuck in on top of one that was supposed to be there, so the input surface (keyboard+touchpad) didn’t fit on properly.
I didn’t realise what the raised component was for a while, and was getting set up to contact support, ask the community etc.
But fortunately I realised, prised the extra magnet off, and then everything was perfect.
I guess that’s the kind of thing you sign up for when getting a DIY assembly version.
I’m just glad it wasn’t an actual problem as I was so keen to get started using it.
And let me tell you, after Windows’ evolution in recent years, using an open-source operating system which hasn’t been enshittified to death by megacorp product managers just feels… so clean.
Yearly Meeting is the annual gathering of all Australian Quakers.
This was my first time at an in-person Yearly Meeting, having only started to attend Quaker meetings in late 2019. And I was so glad that I decided to come!
I deeply appreciated experiencing the formal sessions and seeing how the Society conducts its business.
This, on paper, was something that had attracted me to Quakerism as I was just learning about it.
But being in a large room full of Friends patiently expressing themselves and building up in the spirit of unity really helped me understand it deeply.
Even though I was only able to attend the final Thursday, Friday and Saturday, I was overwhelmed in the best possible way.
The chance to meet Friends from other cities and states was a blessing.
Our community may be small, but its vitality and vivaciousness were evident!
I got the sense that many others were relieved to be meeting in person again after years of online Meetings.
I am very much looking forward to the next time we can do so.
There seems to be a common misconception about the meaning of the term “relational” (in the context of “relational databases”).
There are a few commonly-used phrases:
“Relational data”
“Relational database”
“Relation”
“Relationship”
One of these things is not like the others.
A classic demonstration of the misunderstanding:
Collectively, multiple tables of data are called relational data because it is the relations, not just the individual datasets, that are important. Relations are always defined between a pair of tables.
Many people in tech are obsessed with exponential growth.
But I wonder…
Are things which look like exponential growth always the spread of an idea between humans?
Are they always ideas whose implementation can be easily parallelized or distributed, so that as the idea spreads, the growth of its implementation is not bottlenecked?
Where the growth of an idea was not exponential, was that because its implementation couldn’t proceed in a decentralised way?
Was it because the material resources needed for its implementation weren’t already widely available, ready to be put to use?
Is the end of exponential growth (the top of the S curve) due to saturation of the idea among humans?