{ "version": "https://jsonfeed.org/version/1", "title": "paulsnar's blog", "home_page_url": "https://pn.id.lv/blog/", "feed_url": "https://pn.id.lv/blog/feed.json", "author": {"name": "paulsnar", "url": "https://pn.id.lv/"}, "items": [{ "id": "https://pn.id.lv/blog/2021/01/on-open-source#13", "url": "https://pn.id.lv/blog/2021/01/on-open-source", "title": "On the tyranny of defining “open source”", "date_published": "2021-01-22T21:12:10+00:00", "date_updated": "2021-01-22T22:41:54+00:00", "content_html": "
I am not a lawyer and therefore am possibly not qualified to discuss the legal\nparticulars presented herein. Everything in this post is mere opinion and should\nnot be treated as legal or other advice.
\nRecently, there’s been a bit of a kerfuffle around Elasticsearch. It started with\nthem announcing that they’re relicensing the code under the\nServer Side Public License instead of the Apache 2.0 License it was\nunder beforehand. They have the full power to do this, since contributions to\nElasticsearch were submitted under a CLA that transferred copyright to\nElasticsearch BV.
\nThere are many vocal critics of the SSPL, the foremost being the Open\nSource Initiative. The aforelinked post, however, argues that SSPL is\nnot open source on the basis of the OSD. That, in fact, is not quite\ntrue—the SSPL provides complete and total freedom to run software licensed\nthereunder, provided that the infrastructure software is also made open\nsource. [1] This means that no freedoms are taken away; the SSPL instead makes the\nlicense “viral” the same way the GPL is, except by extending the virality to\ninfrastructure software. In a way, SSPL is just GPL, [2] turned up to 11 and\nadjusted for today’s SaaS-heavy world.
Now, sure, one could argue whether this “virality” isn’t actually a restriction\nof use, for perhaps some software running inside your favourite SaaS is actually\nproprietary and not owned by you, and therefore cannot be made open-source. I’ll\njust have to defer to the precedent set by the Linux kernel,\nwhich has been found in many, many consumer devices, especially wireless APs, and\nwhich, no doubt, has had previously proprietary drivers attached that were made\nopen source by virtue of the GPL. SSPL’s wording might also be in partial\nconflict with OSD’s point 9, but software such as Elasticsearch isn’t really\n“distributed along with other software,” so I believe it doesn’t apply either.
\nBut this is not what I’m here to argue—I’ll defer to Kyle Mitchell, who is in\nfact an actual lawyer and has hashed over the license\nparticulars already.
\nWhen I first learned about open source, it was in the context of freedom. It was\nan ideal: the Four Essential Freedoms, the ability for anyone to use and hack on\nany software anywhere. But this is a colloquialisation; in fact, “open source”\nand “free software” (which I’ll hereafter call “libre software” to disambiguate\nthe “libre” and “gratis” meanings) are nowhere near synonyms, despite how\nthey’re often used.
\nLibre software, as espoused by the Free Software Foundation and Richard\nStallman, indeed is this ideal; it’s the antithesis of software ownership, and\nthe embodiment of the slogan that “information wants to be free,” if we were to\ntreat software as a form of information. And that one I still find a noble goal\nto work towards, although itself not without its problems.
\nOpen source, meanwhile, is a different beast entirely. I’ve heard it described\nas watered down and more business friendly interpretation of libre software, and\nover time it’s proving to be more and more true. [3] Note that the megacorps\nthat are in any measure engaged with the wider developer community always use\nthe term open source, and never (or very rarely) speak of libre software.
The fact is that there is a difference of intent. Libre software is a\nphilosophy that’s in staunch opposition to capitalism, and the vast majority of\nthese corporations operate in a capitalistic framework, which means that\ntheir primary goal, at some level or another, is to make money. Most of the\nactions of most corporations can be viewed through this lens: does this help us\nmake money or does this make it more difficult for us to make money? In this way\nthey are lawnmowers—they do not have empathy; they have only goals and\nwill work towards those goals within a certain framework.
\nAdhering to the principles of libre software would require these corporations to\ngive up the notion that software is their trade secret, and\ninstead give the software away. Unfortunately, that maps poorly onto the\nsoftware that is integrally important to these corporations, such as Google’s\nmonorepo, or Windows for Microsoft.
\nOpen source, meanwhile, is less defined by ideology and more defined by utility.\nOpen source allows a corporation to selectively choose what to share with\nthe world, how to share it, and usually places few restrictions on usage in\nclosed scenarios (which would be unthinkable for libre software—why would you\never do anything with software that isn’t open!?) Even so, exactly what gets\nopen-sourced is subject to deliberation, and the potential benefits and\ndrawbacks are weighed carefully.
\nFor an example, let’s take Google. Both Kubernetes and Bazel grew out of\nGoogle-internal projects, but were eventually open-sourced. My speculation is\nthat this not only brings Google positive publicity, but also\nearns them mindshare within the developer community. After all, Kubernetes has\nbecome a de-facto standard for server software deployment, and that’s pretty\ngreat for Google, which continues to hire multiple\nthousands of employees every year.
\nI suppose what I’m saying is that corporations don’t tend to make their software\nopen source on a whim or with the sole intention being goodwill; every release\nis deliberated, and there is a reason why any one thing should or should not\nbecome open source. I myself have fallen into the trap of reducing this process\ndown to a simple “they did it for the Greater Good,” which is not necessarily\nthe whole truth. As with many things in tech, it’s more nuanced than that.
\nIn this light, it is worth asking the question of whether open source is even a\nworthy goal to strive towards for its own sake. Are you imparting value unto the\nworld by having your code be available to everyone? Are you\ntrying to smash the etairecracy by creating something that’s owned by the\ncommunity instead of a megacorporation? Or are you just doing it out of the\ngoodness of your heart, or because of your principles?
\nWhile I understand the heartbroken sentiment going around about it, I\nfully respect the choice of Elasticsearch BV (and the MongoDB folks, for that\nmatter) to relicense their software. The recent fights about what exactly is or\nisn’t “open source” have left the term defined less precisely than it ever has\nbeen, and therefore it’s no longer a label that carries much value on its\nown.
\nI believe that it’s better to just treat the license of any software on its own\nterms. It is a list of rules, and what those rules prescribe need not fit\nwithin some sort of framework. Whether it fits with the values of libre software\nor with the Open Source Definition is good to know, but a license being “open\nsource” does not necessarily correspond to a useful set of values to be judged\nby. After all, BSD and GPL represent polar opposites of the “FOSS” spectrum, and\ngrouping them under the same banner is not particularly useful. [4]
Also, as a community, I feel like we tend to throw the term “FOSS” around\nwithout considering whether it’s right to do so. Libre software and open source\nsoftware are fundamentally different; just because a project satisfies both\ndefinitions doesn’t mean its values and goals align with either, and by that\nmeasure we shouldn’t apply this label without merit and intention.
\n" }, { "id": "https://pn.id.lv/blog/2020/08/fixing-profanity-builds-on-homebrew-with-non-default-home#12", "url": "https://pn.id.lv/blog/2020/08/fixing-profanity-builds-on-homebrew-with-non-default-home", "title": "Fixing Profanity builds on Homebrew with non-default home", "date_published": "2020-08-21T20:45:53+00:00", "date_updated": "2020-08-21T20:59:29+00:00", "content_html": "TL;DR: brew install https://guro.paulsnar.lv/x/2020/08/profanity-0.9.5-readlinepatch.rb
, or apply the following patch to the stock recipe via patch -p 1
:
--- a/profanity.rb\n+++ b/profanity.rb\n@@ -37,2 +37,3 @@\n system "./bootstrap.sh" if build.head?\n+ inreplace "configure", "/usr/local/opt/readline", Formula["readline"].opt_prefix\n system "./configure", "--disable-dependency-tracking",\n
\nNote: The recipe provided herein is for Profanity v0.9.5 and must be adapted for any other release, though the same steps apply.
\nAs far as software goes, I’m one of the weirder users you’ll find around. For instance, I deliberately don’t use the default directory (/usr/local
) for Homebrew because I don’t want to install stuff globally – instead I use the less supported but still legitimate approach of just putting it anywhere.
While it comes with its downsides (because some pre-built bottles assume that the installation directory is /usr/local
and therefore must be re-built from source if they’re to be installed elsewhere), for most cases it’s not a problem. That is, until I tried installing Profanity for some testing recently.
The configure script fails thusly:
\nLast 15 lines from /Users/paulsnar/Library/Logs/Homebrew/profanity/01.configure:\nchecking pkg-config is at least version 0.9.0... yes\nchecking for libmesode... no\nchecking for libstrophe... yes\nchecking whether libstrophe works... yes\nchecking for ncursesw... yes\nchecking for wget_wch support in ncursesw... yes\nchecking for glib... yes\nchecking for gio... yes\nchecking for library containing fmod... none required\nchecking for curl... yes\nchecking for SQLITE... yes\nchecking for GTK... no\nconfigure: gtk+-2.0 not found, icons and clipboard not enabled\nchecking for /usr/local/opt/readline/lib... no\nconfigure: error: libreadline is required for profanity\n
\nIndeed, the last two lines might seem suspicious: why is configure
looking for libreadline in /usr/local/opt
? That seems like a bad assumption on their part.
And, indeed, they do look for readline wrongly on macOS. The proper fix would be to just use the readline provided by the environment while also allowing other locations to be specified, and Homebrew could just do that, like they already do for, e.g., PHP.
\nI wanted to wrap up this change for upstream but I couldn’t manage to get Autoconf working after spending about four hours on it. [1] Hence, there’s an alternative dirty fix—within the Homebrew build recipe, just replace the offending location with the correct one right before running ./configure
:
def install\n system "./bootstrap.sh" if build.head?\n inreplace "configure", "/usr/local/opt/readline", Formula["readline"].opt_prefix\n system "./configure", "--disable-dependency-tracking",\n "--disable-silent-rules",\n "--prefix=#{prefix}"\n system "make", "install"\n end\n
\nThe significant change here is the addition of the inreplace
line.
After configure
is prepared, we just go in and outright replace the hardcoded reference to /usr/local/opt/readline
with the one that Homebrew has set. And that works!
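For the curious, inreplace amounts to little more than a textual find-and-replace on the file before it’s used. Here’s a rough Python sketch of the idea (not Homebrew’s actual implementation; the file name in the demo is hypothetical):

```python
from pathlib import Path

def inreplace(path, old, new):
    # Rough stand-in for Homebrew's inreplace: rewrite a file in
    # place, swapping a hardcoded string for the correct one.
    script = Path(path)
    text = script.read_text()
    if old not in text:
        # Complain loudly when the pattern is missing; this catches
        # the day upstream changes the configure script under us.
        raise ValueError(f"{old!r} not found in {path}")
    script.write_text(text.replace(old, new))

# Hypothetical demo against a stand-in configure script:
Path("configure-demo").write_text('READLINE="/usr/local/opt/readline"\n')
inreplace("configure-demo", "/usr/local/opt/readline",
          "/opt/homebrew/opt/readline")  # example replacement prefix
print(Path("configure-demo").read_text())
```

The failure on a missing pattern matters: if upstream ever fixes their readline lookup, the recipe errors out instead of silently patching nothing.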
It’s a dirty fix, and it definitely doesn’t belong within Homebrew/core, but I’d appreciate if anybody let Profanity folk know that their Autoconf setup could use some improvement.
\n" }, { "id": "https://pn.id.lv/blog/2020/07/naivete-of-programming-i18n#11", "url": "https://pn.id.lv/blog/2020/07/naivete-of-programming-i18n", "title": "The naïveté of programming language internationalization", "date_published": "2020-07-19T07:10:14+00:00", "date_updated": "2020-08-15T16:39:12+00:00", "content_html": "As a speaker of both English and a niche European language which has many interesting and archaic features that have eroded in other languages, I have my fair share of skepticism of internationalization efforts that amount to simple string replacement, especially after I’ve tried to do i18n right, which took some effort.
\nCitrine is a programming language intended to be translatable and portable across supported human languages in order to both ease programming for non-English-speakers and allow international teams to work together more effectively. It is a noble goal, and I do like the way that they’re bringing to light many of the concepts of Smalltalk, which is one of my favourite non-mainstream languages that I have also never worked with.
\nUnfortunately, their translation system seems to be little more than basic string replacement. That is simpler to implement from a programming standpoint, I’ll grant them that, but string substitution works well only when the target language has a feature set that is near-identical to the source one.
\nThere are many ways such an assumption could break down. The simplest of these is word order – forcing subject verb: object.
as the canonical ordering could be awkward in languages with different ones.
A more complex example is fusional languages, of which Latvian is one, where the same word (usually a noun or verb) can take on one of a multitude of suffixes that convey its properties and meaning within the sentence. English is also fusional to an extent, but that extent is low and contained, and, importantly, doesn’t extend to nouns; in languages with grammatical case distinguished by declension, using the nominative for variables might be unnatural and weird, and string replacement can’t really produce different declensions.[1]
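To make the failure mode concrete, here is a toy sketch (in no way Citrine’s actual translator; the tiny English-to-Latvian vocabulary is purely illustrative) of keyword-table translation, where each source token maps to exactly one surface form:

```python
# Toy keyword-table "translation" of the kind critiqued above:
# every source token maps to exactly one target surface form.
DICTIONARY_EN_TO_LV = {
    "dog": "suns",    # nominative case only: the dog as subject
    "give:": "dod:",  # a Smalltalk-style message selector
}

def translate(tokens):
    return [DICTIONARY_EN_TO_LV.get(t, t) for t in tokens]

# Subject position: the nominative "suns" happens to be correct here.
print(translate(["dog", "give:", "bone", "."]))

# But as an indirect object, Latvian needs the dative form "sunim";
# a one-to-one table can only ever emit "suns", so the output reads
# as ungrammatically as English with all its prepositions dropped.
print(translate(["owner", "give:", "dog", "."]))
```

The table can never produce role-dependent forms, which is exactly why declension defeats plain substitution.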
Meanwhile, on HN I saw a couple of comments noting that the translation was simplistic and didn’t account for differences between languages that might cause problems in applying the translation, like assuming prepositions exist in the target language and work similarly to English.
\nThough another comment pointed out that Citrine isn’t attempting natural language processing, one of its goals is adoptability by non-English speakers. Pushing assumptions baked into English onto other languages where they don’t hold true seems counterproductive to that goal.
\nTranslating any one piece of text takes a mind of two – both a complete understanding of the text and all of its nuance in its original language, and a knowledge and mastery of the foreign language to be able to reproduce it faithfully. Programming languages are no different; just because they use words from one language, doesn’t mean that you can just swap out these words for the same ones in a different language,[2] especially if the target language has no such equivalent word, or its meaning is overloaded in the source.
Actually, this makes me think that languages of the APL family might be far more universal in this regard. Their reliance on symbols instead of words obviates the need for translation among human languages, just an understanding of a symbolic one.
\nThis relatedly linked thread also lists a couple of interesting approaches to localized programming languages.
\n" }, { "id": "https://pn.id.lv/blog/2020/01/federation-is-not-a-silver-bullet#10", "url": "https://pn.id.lv/blog/2020/01/federation-is-not-a-silver-bullet", "title": "Federation is not a silver bullet", "date_published": "2020-01-25T19:20:19+00:00", "date_updated": "2020-01-25T19:50:53+00:00", "content_html": "So Byte has finally launched. It’s one of the few social networks I was looking forward to, as it promises to be a nice place and a positive medium for creators. Vine itself was a boon for creativity due to its constraints, and Dom has definitely put a lot of care into Byte and it shows. [1]
But on my local Mastodon I noticed a toot mentioning that, while Byte is a nice app, it isn’t federated, so that particular user seemingly won’t be using it. I understand the sentiment, but it rubbed me the wrong way, so I felt like I needed to write a bit of a rebuttal.
\nWe, the tech geeks, take a particular glee in using stuff that is technologically superior compared to what’s mainstream. [2] This includes OSes, messaging services, and yes, social networks. [3] But not everything should be federated, especially if federation isn’t built into the network from day one.
The main reason, and I know it isn’t technical per se and hence will seem hollow to many, is that federated networks thus far haven’t been able to present a compelling user story. Federation will always suffer from the extra step of choosing the server where you want to host your content, which doesn’t fit well into the modern app paradigm of “open it and you’re done.” Sure, you could use a default server or just follow a friend and join the server of their choice, but it’s still more cumbersome than what larger social networks offer now.
\nThe thing is, few of the things tech geeks use have particularly great UX. Mastodon might be close to being an exception, but after having used Byte for just a couple of minutes, its UX is great and the onboarding was stellar. As mentioned before, Byte’s creators have really put in a lot of care to make an app that’s not just great to use, but pleasant and frictionless. Most technologies that geeks use actually introduce more friction than mainstream alternatives, which move in exactly the opposite direction. [4]
Byte is also intending to create a system for rewarding creators for their content and that definitely seems much easier to do with a centralized service than a federated one.
\nAlso, federated networks, due to the inherent limitations present in federation, [5] can’t have discovery tools that are as good as those of centralized platforms. The main way centralized platforms can provide good recommendations [6] is datamining, which is against the ethos of many federated open-source platforms, outside of their abilities and often impossible due to not having a centralized way to process enough data.
Federated networks do have their solutions to discovery (namely, Mastodon attempts to widen the sphere of availability to all the instances that the current one federates with, and even presents a unified timeline firehose of all available toots), but their quality is far from comparable.
So there’s my two cents. I doubt I’ll become a Byte addict but I’ll check in occasionally, notwithstanding the fact that it’s centralized. It’s a nice space on the Internet and I hope it remains that way, no matter what the nerds say.
\n" }, { "id": "https://pn.id.lv/blog/2019/11/nb#9", "url": "https://pn.id.lv/blog/2019/11/nb", "title": "NB", "date_published": "2019-11-13T15:27:08+00:00", "date_updated": "2019-11-13T15:27:51+00:00", "content_html": "After writing more than 30 posts on my short form blog, I think I’ve made my peace with how far I took b3, for its paradigm probably wouldn’t fit microblogging (or tumblelogging, if you fancy) particularly well, and Baudot[1] works just fine for what I need, and is quite easy to iterate upon due to its small codebase.
Besides I don’t blog much anyway, which also makes me feel sorry for not having anything more worthwhile to express here. Hopefully this at least answers some questions about what I’m doing now.
\nPosts on here will continue, but only whenever I find something to say. My recent opinion dump on Superliminal came the closest to being worth a blog post of its own, but the tumblelog prevailed and I trimmed my thoughts down. It wasn’t quite long enough to feel at home here, yet slightly more verbose than usual over there.
\nHave a nice day.
\n" }, { "id": "https://pn.id.lv/blog/2019/10/on-bodging#8", "url": "https://pn.id.lv/blog/2019/10/on-bodging", "title": "On Bodging", "date_published": "2019-10-19T14:55:39+00:00", "date_updated": "2019-10-19T17:18:18+00:00", "content_html": "Recently, I’ve put together two tiny projects mostly for personal usage.
\nUsually I strive for my projects to uphold good code practices as far as possible. I find myself a bit of a perfectionist and I want my code to be a joy to work on, both for me and for others.
\nBut that’s hard. As I alluded to in the previous post on here, it seems that I can’t hold focus on working on a single project for too long.
\nI’ve been wondering how to alleviate this. Perhaps the aforementioned idealism is at fault.
\nRecently, a friend asked about what has informed my philosophy on programming and computing in general. Even though I couldn’t exactly give a list of citations,[1] it made me think about what my philosophy even is, and to put it down in concrete terms. So I’ll attempt listing my opinions here, as they apply to software I create.
The language both of the aforementioned projects were written in, for multiple reasons, was PHP. Now, as related to the last fact outlined above, the paradigm that PHP supposedly supports best seems to be mostly object-oriented programming, with some imperative programming mixed in.
\nOOP is likely the most popular programming paradigm in use today. Programming languages such as Java which support and encourage or even force OOP are among the most popular.
\nMy experience with OOP has been a mixed bag at best. I believe that I know OOP principles well enough insofar they’re applied in practice, and I’ve both contributed to a few projects and written plenty of my own following best practices as I could.
\nHowever, OOP can be taken to an extreme, and it often is. OOP encourages thinking about the environment and the world of the program as composed exclusively of other OOP primitives—mostly, objects—and boxing everything else into OOP categories so no trace of other styles remains.
\nThis seems a bit disparaging when it’s put this way, but indeed creating a pure OOP design seems like catnip to some minds. I’ve been struck with this idea multiple times, taking object-oriented principles as far as possible, creating an immaculate system of objects that only talk to other objects through well-defined interfaces. The project was probably perfectly testable and components were fiercely decoupled.
\nYet I can’t say I was happy with it afterwards. Sure, it might seem fun creating all the scaffolding around the fundamentally imperative idea of sequential commands mutating data to make it seem more immaculate in an object-oriented world, but working on such code afterwards I find quite hard.
\nPerhaps it’s just confirmation bias. Until now I haven’t really tried stepping out of the OOP zone for my web projects in particular, even though I’ve done some others in the meantime. But I do feel strongly that the OOP paradigm tends to be taken to an unusual extreme more often than not.
\nIn either case, my resolution is to follow the call of the bodge: swap code design idealism for taking some shortcuts, so long as they don’t contradict the scope of the project at hand.
\nThough the inner idealist might suggest that it’s worth putting in all the effort into every single detail of every single project, that isn’t sustainable and will just kill the joy of working on anything before it gives tangible results.
\nThe only two things that will matter in the end are whether the project works, and whether it works well for its users. It’s okay to put in details that the user will notice, but obsessing over the technical side will only get you so far, which actually isn’t that far at all.
\nIf the project never ships, it has failed both of these goals. Therefore it’s worth making the tradeoff of relaxing code quality requirements in order to ship the project in the first place.
\nDespite what I claimed earlier, I acknowledge that the cathedral-ish nature of what I call “extreme OOP” has its merits and is popular for a reason. If one believes in test-driven development or extensive automatic testing, properly using object-oriented paradigms greatly simplifies accomplishing these goals.
\nUnfortunately, my aforementioned projects, as well as others I’m considering, are so small, menial and integrated that putting significant effort into unit testing would prolong their development cycle quite a bit, and they can usually be tested manually much faster.
\nThis is somewhat in line with the opinions of people like Marco Arment and “Underscore” David Smith, who don’t do much, if any, unit testing on their apps, mostly testing them manually instead. I can’t commit to suggesting anyone else do this because it seems to go against good practice, but perhaps there is a place for a category of people who aren’t aligned with automated testing.
\nPerhaps it’s worth pondering whether what we call “good practices” are much more than social conventions, like the other norms we’ve been following without giving any thought to them, both within society and during programming.
\nI can’t claim this is a panacea by any measure. I’ve also noticed that this post speaks almost exclusively about myself and my experiences and opinions. It seems like this blog doesn’t actually have any recurring readers, so with this post I’m treating it more as a place to dump my thoughts in search of some clarity, and perhaps one day it will either come back to haunt me or be useful to someone else. Until then, it is what it is.
\n" }, { "id": "https://pn.id.lv/blog/2019/10/breaking-the-silence#7", "url": "https://pn.id.lv/blog/2019/10/breaking-the-silence", "title": "Breaking the Silence", "date_published": "2019-10-18T17:06:36+00:00", "date_updated": "2019-10-19T11:44:03+00:00", "content_html": "Since it appears that I still don’t have anything of value to impart upon this place that to me feels too pure for anything that I might say, as if my thoughts would somehow leave a stain upon this cathedral where only higher ideals are permitted, I suppose the best I can do is get over that and instead just put out some flow-of-consciousness style block of text and hope that something of value comes out of that.
\nSomething that I find detrimental to my current experience as a programmer is that I find it hard to focus on a single project for a prolonged period of time. Perhaps it is just my inner perfectionist providing discouragement when I attempt to work on a somewhat menial project with a limited audience, but nonetheless b3 has also fallen by the wayside as something that neither I nor anyone else will use much, hence all progress appears to have halted.
\nWhile I do on occasion find something I want to share, most of which happens to be links, it doesn’t help that the format of this blog is not particularly suited for that. I did have plans to make b3 work with more than one type of post but I guess I couldn’t exert enough pressure upon myself to get that done. So the alternative would be to alter my workflow and create a new place and tool for posting more free-flowing content.
\nIf some tools for solving a certain problem already exist, but you find them only partially unfit for your workflow, then one might encounter the problem that the reward of doing this is inversely proportional to the delta between your ideal workflow and the one facilitated by those tools. If the mismatch results only in mild chafing, it’s unlikely that creating a new tool from scratch, especially if this new tool takes a lot of effort to create, will feel particularly good after you’ve done it, especially considering that this newly-created tool might not ever get much of an audience, and you might find losing interest in it as well.[1] This might be one of the reasons why I find it hard to work on anything really.
This might seem like an overly pessimistic position to take, and I’ll be the first to admit that it indeed is so, but I don’t think anything that other people suggest can help fight your subconscious which inherently doubts the success of anything that one might ever attempt. I can only admire those who possess the enthusiasm to hop onto working on new projects and be within a constant cycle of prototyping, abandoning anything that doesn’t look like it has the potential to be successful and not caring about that too much.
\nHence why I’ve been feeling somewhat stuck recently. Even if I happen to have ideas, those don’t tend to be long-lived as I find all the reasons why it’s not worth putting work into making them come to life. Some of my workflows are slightly chafing but nothing’s so dire that it demands a new tool to be created. Yet it’s not ideal, and every time I interact with a tool that I know I could make better it puts me off just a little.
\nIt’s similar with long-form writing for which this blog was intended. I don’t feel like I have as many opinions as I once did, which is a whole problem unto itself.
\nSimilarly, even though I want to get out the things I want to do in the near future, I feel like that’s actually detrimental to the possibility of me actually doing them. I believe it was CGP Grey who said something to the effect of this:[2] “when you share with others that you’re working on a thing, your brain gets the satisfaction of feeling like you’re doing productive work on the thing and you feel less inclined to actually do the thing.” This I find doubly true with myself, so I’m wary of sharing anything about my potential current or future projects with others because that seems to be the most surefire way to have them instead die prematurely.
So I won’t. Sorry.
\nEdit: A small project has been put into action. Do follow that, and do get in touch with any feedback you might have.
\nOne of the goals I’ve set for myself is to read more books. Therefore I wanted to share a single recommendation for now: “Because Internet” by Gretchen McCulloch. I got it on Tom Scott’s recommendation and I haven’t regretted it the least so far.
\nDo get in touch with any comments.
\n" }, { "id": "https://pn.id.lv/blog/2019/08/b3v2#5", "url": "https://pn.id.lv/blog/2019/08/b3v2", "title": "b3v2", "date_published": "2019-08-25T21:15:54+00:00", "date_updated": "2019-08-25T21:38:47+00:00", "content_html": "b3 is finally in a state where it can power this site. At last it’s more than a toolset for my personal use—now it’s closer to blogware.
\nMuch of the inspiration for b3 comes from the early versions of Movable Type, which had a similarly constrained feature set, similarly uncomplicated code and similar constraints on its applicability.
\nThere are still things I want to effect, and I don’t really have anything particularly thoughtful to say,[1] but let this post serve as a historical landmark of achievement in the realm of my own.
Oh, also there’s now an Atom feed if you’re into that sort of thing. Also, sorry, the easter egg from this blog’s v1 got removed and replaced with a proper system-supported implementation of the same thing.
\n" }, { "id": "https://pn.id.lv/blog/2019/08/internal-meta-monologue#4", "url": "https://pn.id.lv/blog/2019/08/internal-meta-monologue", "title": "Internal Meta-Monologue", "date_published": "2019-08-13T19:40:00+00:00", "date_updated": "2019-08-26T12:36:52+00:00", "content_html": "Okay, so I have brought the first incarnation of b3 to a point where it can sustain blogging as far as I need it. There are still some things that are on my to-do list that I want to do, but I can’t actually bring myself to make progress on any of them.
\nTherefore I’m writing this document as a sort-of manifesto which shall dictate my future course of action vis-à-vis this blog and the software that makes it tick.
\nb3 hasn’t actually turned out to be what I intended for it to become. After a day or two of procrastinating, I’ve pulled together my thoughts on what went wrong and how I will fix it.
\nIn a recent email to an inspiration for this blog, I described the workflow used by most other technical bloggers nowadays: the post gets written, the site gets rendered via a static site generator such as Jekyll, and the result gets checked into a Git repo and pushed to deploy. This approach works for many, but it also involves a lot of friction, which is especially noticeable when it comes to publishing smaller pieces of content, e.g., microblogs or linklogs. I’ve found this friction is also enough to cause me to abandon any of my online writing efforts soon after getting the initial sketch up, as is starting to happen now as well.
\nErgo I needed a model which imposed less friction, such as the one used by fully dynamic CMSes, for instance, Wordpress. The idea that the content is published directly and immediately, as well as the ability to have a nice instant preview and other such functionality, makes for a pretty low-friction model, which is why some technical bloggers continue to use Wordpress even for new blogging efforts.
\nOn the other hand, most such CMSes have their deficiencies: they’re often quite heavyweight, and under most configurations every request creates a hit to the database. I wish to have the posts themselves be fully static and the CMS itself to have the absolute minimum of code to implement this.
\nTaking all of this into account, some adept and experienced[1] readers might notice that what I strive for is similar to what’s provided by Movable Type. I really like Movable Type in principle, especially the older versions (1.1 and thereabouts), but unfortunately, I can’t use Movable Type itself since it is quite expensive and doesn’t align well with my philosophy of using open-source software wherever possible. Besides, I probably wouldn’t actually be able to use version 1.1 in any legal manner now.
It seems to me that implementing what I consider to be a simple CMS can’t be too difficult, so my current stance is that I’m going to reimplement Movable Type by myself. The next version of b3 is intended to be a static-site generator whose primary interface is the browser; basically the best of both Wordpress and Movable Type.[2]
For you, the reader, little will change. Actually, it’s my goal for nothing to change as of now—the major improvement is for me as the writer. In the future this system might allow me to add more dynamic features to the site, but it might be too early to speculate about that.
\nI do intend to continue developing b3 in the open, so in the unlikely case you’re seeking a CMS of your own and aren’t repulsed by the use of PHP (which I’ll excuse as being the more pragmatic choice for me), perhaps I might interest you in trying b3 out.
\n" }, { "id": "https://pn.id.lv/blog/2019/08/gitea-mirroring#3", "url": "https://pn.id.lv/blog/2019/08/gitea-mirroring", "title": "Mirroring Gitea repositories to GitHub", "date_published": "2019-08-08T19:15:00+00:00", "content_html": "Gitea is pretty neat if you want a self-hosted GitHub alternative for the most part, even though it is sometimes a bit light on features that other Git suites provide.
\nAn example I’ve found useful in the past is GitLab’s push mirroring, which automatically pushes any changes in a local repo onto a remote one. [1] Gitea does not natively support this, but with some effort a post-receive
hook can do nearly [2] the same.
This tutorial is written with GitHub as the intended mirroring target in mind. The steps for other Git suites are similar but some particulars might differ.
\nBefore installing the hook, you’ll need to generate a single-use SSH key and register that as a write-enabled deploy key on the repository you’re intending to mirror to. I prefer ed25519
keys, but rsa
might be more compatible.
From a command line run these commands:
\ncd $(mktemp -d)
\nssh-keygen -t ed25519 -f key
(substitute rsa
for ed25519
if worried about compatibility)\ncat key.pub
\ncat key
\ncd -; rm -r "$OLDPWD"
\nAfterwards, go to your Gitea instance and open the repository which you intend to mirror, go to its settings, open the tab “Git Hooks” and edit the post-receive hook. If there’s something within the “Hook Content” box, delete it and paste this in:
\n#!/bin/sh\nKEY="put your key here"\nREMOTE="put your remote URL here"\n\n##########\n\nkeyname=$(mktemp)\nchmod 0600 "$keyname"\necho "$KEY" >"$keyname"\nchmod 0400 "$keyname"\n\nknownhosts=$(mktemp)\necho 'github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==' >"$knownhosts"\nchmod 0600 "$knownhosts"\n\nGIT_SSH_COMMAND="ssh -i '$keyname' -o 'CheckHostIP no' -o 'UserKnownHostsFile $knownhosts'" git push --force --mirror "$REMOTE"\nrm "$keyname" "$knownhosts"\n
\nReplace the value of the KEY
variable (within double quotes) with the output from cat key
above. It’s okay for it to span multiple lines, and the final quotation mark should be on its own line.
Replace the value of the REMOTE
variable with the remote URL to mirror unto. For GitHub it should be something like git@github.com:username/repo.git
.
So all-in-all, the first couple lines of the hook should look something like this:
\nKEY="-----BEGIN OPENSSH PRIVATE KEY-----\nloremipsumdolorsitametconsecteturadipiscingelitfugitdoloresaperiamquia\netnonomnisullamconsequunturnumquamquitemporalaboriosamsedilloetcorpori\ninautrepudiandaequiminimadolorquiautemnihiletperspiciatisutetvoluptate\nquiacorporisminimanonvoluptatemquitenetureavoluptasesseconsequaturnamr\naperiamadiustodoloremexplicabomolestiasnequesedquodqui==\n-----END OPENSSH PRIVATE KEY-----\n"\nREMOTE="git@github.com:username/repo.git"\n
\nWhen that’s all done, click “Update Hook” and you should be good to go!
\nThe building block of this interaction is the Git hook, which is basically a shell script that Git runs whenever some event occurs. Some hooks allow for modifying Git’s behaviour or running code during commit, sync and other actions, but in this case we just push the changes we receive unto another repository.
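To illustrate what a post-receive hook actually sees, Git hands it one line per updated ref on standard input. A minimal sketch that just logs each update instead of mirroring (the SHAs in the sample input are made up; in a real hook Git supplies these lines):

```shell
#!/bin/sh
# A post-receive hook receives "<old-sha> <new-sha> <refname>" lines on
# stdin, one per updated ref. The printf below stands in for Git's input.
printf '%s\n' 'e83c516 9f3b2a1 refs/heads/main' |
while read oldrev newrev refname; do
  echo "ref $refname moved from $oldrev to $newrev"
done
```

The mirroring hook above ignores this input entirely and just force-pushes everything, which is why it can get away with reading nothing from stdin.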
\nUsing single-use SSH keys stored within the hook is a reasonable security practice, since the only people who can access them are those with privileges to see the Git hooks, who could modify the repo anyway. The hook itself writes the key to a tightly-permissioned temporary file and removes it right after the push, so a leak is unlikely.
\nAs a peculiarity of GitHub, a single deploy key can only be used with a single repository, so for each repo you’re intending to mirror you’ll need a new key. This is better from a security standpoint because a single key being compromised doesn’t grant access to all repositories that might be mirrored using this technique.
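Since each mirrored repository needs its own deploy key, generating them can be scripted. A minimal sketch, assuming hypothetical repository names and passphrase-less keys written to the current directory:

```shell
#!/bin/sh
# Generate one single-use, passphrase-less deploy key per mirrored repo.
# The repo names here are hypothetical placeholders.
for repo in some-repo another-repo; do
  ssh-keygen -q -t ed25519 -N '' -f "key-$repo" -C "mirror key for $repo"
done
# key-<repo>.pub becomes the write-enabled deploy key on GitHub;
# key-<repo> goes into the KEY variable of that repo's post-receive hook.
```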
\nWithin the hook there is some chmod
ding to maintain key security by ensuring other UNIX users can’t read the key. SSH requires this and will otherwise just refuse to use the key at all.
Within the hook the knownhosts
file is populated as well. This works around another SSH security precaution, which you might’ve encountered occasionally as the message The authenticity of host (..) can't be established. Are you sure you want to continue connecting?
, and upon confirming, the remote host’s key is saved for future connections. As the hook runs in an essentially ephemeral, non-interactive context where nobody can confirm that prompt, the known-hosts file is generated on the fly instead. [3]
For debugging purposes -vvv
can be given to ssh within GIT_SSH_COMMAND
. This will print the entirety of ssh’s debugging output to the terminal where the push takes place.
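Concretely, the debugging variant is just the push line from the hook with the extra flag added; `$keyname`, `$knownhosts` and `$REMOTE` are the variables defined earlier in the hook, reproduced here as placeholders:

```shell
# Fragment of the hook's push line with -vvv added, so ssh prints its
# full debugging output during the push. Not runnable on its own:
# $keyname, $knownhosts and $REMOTE come from the surrounding hook.
GIT_SSH_COMMAND="ssh -vvv -i '$keyname' -o 'CheckHostIP no' -o 'UserKnownHostsFile $knownhosts'" git push --force --mirror "$REMOTE"
```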
If you have any questions or concerns, feel free to contact me.
\n" }, { "id": "https://pn.id.lv/blog/2019/08/hello-world#1", "url": "https://pn.id.lv/blog/2019/08/hello-world", "title": "Hello World", "date_published": "2019-08-06T18:54:30+00:00", "date_updated": "2019-08-25T19:29:17+00:00", "content_html": "Well then. Here we are.
\nIt’s been a while since I’ve had a blog. There has been more than one occasion when I wished I had one, but, alas, I didn’t.
\nI suppose that’s set to change now. I’ve gleaned just enough inspiration to push this thing over the edge, and you’re reading the result of that now.
\nSince every web developer must go through writing their own CMS at least once, here we are: I’ve done it now. Right now it’s little more than a static site generator (as is the trend), but I do intend on giving it some dynamic smarts à la Movable Type.
\nI’ve noted a trend lately where people who write their own CMS[1] from scratch often don’t include support for feeds. I can understand that since the number of people who use feeds for subscribing to sites has been dwindling ever since social media has become the conduit for internet content, but I want to buck this trend as far as I can.
RSS and feeds in general still remain the plumbing of the Internet, and large companies who aren’t really keen on the freedom of information circulation that feeds provide still keep the plumbing running, even if the system itself appears to be frozen in time since 2010 or so.
\nHence this site has a proper JSON Feed and an Atom one from the get-go.
\nThere are many blogs out there, most of which just contain long-form content. I’ve been having some thought experiments on how to better share content of other kinds, and I hope for this blog to eventually be an avenue for that.
\nOn the basis that most of these blog-like things built by software engineers tend to be open-source, I suppose I have no choice but to follow. So the source for this site is hosted on GitHub, to which it is mirrored from a private Gitea instance.
\nWhen the platform becomes more dynamic, I won’t be able to persist all content to the Git repo, but I’ll strive to keep the underlying tech open regardless so it may find some use elsewhere.
\n" }] }