compact musings of a multi-faceted geek

My Ubuntu installation on my laptop somehow became hosed and I have no idea what I did wrong. Some application snatched keyboard shortcuts and I couldn't figure out which one, other things felt weird too, and so, after my experiences with earlier reinstalls, I felt the urge to either do a clean reinstall (my desktop machine still runs stock Ubuntu and I'm generally happy with it) or to try something new. Now that I have a new desktop, my laptop has become somewhat of an experimental machine (a thing that surely will never bite me in the arse in the future, I'm sure), running the fast insider version of Windows (so that I can play around with WSL2), and so I thought I'd continue to make it more experimental.

This machine now runs Manjaro, in the XFCE edition. Manjaro (and its underlying Arch Linux foundations) has intrigued me for a while, and I've often used the Arch wiki as an excellent source of knowledge about all things Linux and hardware. I didn't want to go all in with Arch, and Manjaro has a good reputation for being a solid, desktop-focused Arch derivative.

Hands-on-ness +1

So far, so good. I have to say, everything is quite a bit more hands-on here. After installing packages that provide services (databases, etc.) you then have to activate those services yourself, which may actually be a benefit but was irritating at first. With the help of the AUR, the unofficial user-generated package repo, I have yet to come across a piece of software I couldn't install via pacman (or rather pamac, which handles the AUR builds on Manjaro).
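For the record, this is roughly what that extra step looks like – a sketch assuming systemd, with OpenSSH as the example (the actual service name depends on the package):

```shell
# Install the package from the official repos (OpenSSH as an example)
sudo pacman -S openssh

# The service is now installed, but neither running nor set to start
# at boot – you have to enable and start it yourself
sudo systemctl enable --now sshd

# Verify it actually came up
systemctl status sshd
```

On Ubuntu, by contrast, the postinst scripts of most service packages enable and start the service for you, which is why this tripped me up at first.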

I also had to adjust a couple of things by hand, such as the Synaptics driver settings for my touchpad, but that's something I struggled with on Ubuntu as well. It's one of those things where the Apple experience is really just so much better.

The dreaded HiDPI

One thing that is annoying is that XFCE still struggles a bit with HiDPI displays. In hindsight, this is one of the most annoying things about this laptop, and I sometimes wonder if I wouldn't have been better off with a standard HD display that would simply work. Support in GNOME on Ubuntu was “fine”. It wasn't great, and the odd application simply refused to scale properly. Luckily, this display doesn't look too blurry at smaller resolutions (such as full HD, which I'm running now), and the fact that I'm slowly going blind (I'm developing an annoying long-sightedness – seems to be about time, I guess) does help.
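For what it's worth, XFCE can at least do integer scaling via xfconf – a sketch of the knobs I mean (property names as documented for XFCE 4.12/4.13; there are no fractional values, which is exactly the limitation):

```shell
# Double all GTK UI elements on a HiDPI panel (integer factors only)
xfconf-query -c xsettings -p /Gdk/WindowScalingFactor -s 2

# The window manager needs a HiDPI theme too, or titlebars stay tiny
xfconf-query -c xfwm4 -p /general/theme -s Default-xhdpi
```

That gets you to 2x, but on a 13” 4K-ish panel 2x is often too big and 1x too small, which is why I ended up just running a lower resolution instead.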

Apart from that, XFCE is quite nice and surprisingly undemanding – a breath of fresh air after the resource hog that a modern GNOME can be. Since installing it I hear the fans a lot less, and battery life got a lot better as well.

What also needs to be said is how awesome it is that I can simply back up my home directory, install a new OS, dump my home directory back in, and apart from a few nicks here and there (such as things like databases) everything pretty much works.

I have now completed my migration away from Apple computers (as in general-purpose computing devices; I still have and will keep my iPhone for now) – the last step was to get a proper modern monitor for my new desktop machine, as I was running on an old-ish 24” monitor a friend loaned me. I waffled on the monitor decision quite a bit, as I wasn't quite sure what my needs were. I wanted 4K, I wanted 27”, but I didn't want to spend a ton of money and I still wanted a decent monitor with, for example, good viewing angles (which more or less means IPS these days).

In the end I settled on the budget option, one that, just by look, feel and features, doesn't really feel low budget: the AOC U2777PQU. What I like about it is that it has HDMI, DP, DVI and VGA connectors, which makes attaching really old hardware relatively easy. It does 60Hz via DP, as one would expect, and works like a charm behind my GTX 1050 Ti.

The main issue I have with it right now is that scaling the UI on Linux so that it works with my old eyes without using the space super inefficiently is currently not super straightforward. If all goes well, Ubuntu 19.04 should actually contain the GNOME version that re-introduces fractional scaling, so I'm hoping for that.
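In the meantime, GNOME's fractional scaling can reportedly be switched on as an experimental mutter feature (Wayland sessions only, and the exact behaviour depends on the GNOME version) – a sketch:

```shell
# Opt into mutter's experimental fractional scaling
gsettings set org.gnome.mutter experimental-features \
  "['scale-monitor-framebuffer']"

# Check that the flag stuck; after logging out and back in,
# Settings → Displays should offer 125%/150%/175% scales
gsettings get org.gnome.mutter experimental-features
</imports>
```

I haven't committed to this yet, as it's exactly the kind of experimental flag that tends to come with rendering glitches.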

But what I wanted to talk about a little more is something that occurred to me recently: me moving from the Mac, a platform I've used almost exclusively for the last 12 years or so, back to Windows and Linux, both systems I'd used before 2006, was, in many ways, much more uneventful than I thought it would be. Here are a couple of the things I was surprised about:

Keyboard and mouse

I almost exclusively used trackpads after external trackpads became available from Apple. I used the multi-finger gestures quite a bit, and somehow, going back to a normal mouse wasn't quite the big deal I thought it would be. Using the mouse wheel for scrolling feels effortless, and I'm even using the wheel-tilting horizontal scrolling quite a bit. I do struggle a bit more with the keyboard, mainly because most Linux and Windows programs lack the Mac shortcuts that serve the same purpose as Pos1 (Home) and End do on a Windows keyboard. My problems are, I think, mostly down to me using a compact keyboard that lacks the extra block of keys above the arrow keys, so I have to somehow get used to Pos1 and End on the number block, which is something, I can safely say, I've never done before.

Essential software

I use Evolution for Email on Linux, mostly because of the very nice calendar integration.

I had so many issues with Airmail (and Mail.app) on the Mac that I don't even really miss the much nicer interface. At least in Evolution, search works very well, which is something I always struggled with on my Mac.

I do miss Fantastical, and I would pay serious money for a comparable calendar app on either Linux or Windows.

I use Firefox and Chrome as browsers, I run Spotify to listen to music, I use Sublime Text as my editor of choice, and I run Tilix, which is actually my favourite thing about the Linux desktop right now.

There's one thing that is a serious pain in the arse, and that's Skype on Linux. It's incredibly buggy, and I often have to tweak the audio settings using external tools (like the PulseAudio volume control) as selecting a soundcard for output in Skype doesn't necessarily mean that Skype will use that output.
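The workaround I mean, for reference: PulseAudio can re-route a running application's stream from the command line, which is the same thing pavucontrol does graphically. The stream index and sink name below are made up – you'd read the real ones off the list commands:

```shell
# Find Skype's playback stream and note its index
pactl list short sink-inputs

# List the available outputs (sinks)
pactl list short sinks

# Move stream 12 (hypothetical index) to a USB headset
# (hypothetical sink name taken from the list above)
pactl move-sink-input 12 alsa_output.usb-headset.analog-stereo
```

It works, but having to do this mid-call because Skype ignored its own output setting is not a great look.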

Unfortunately I also have to use Microsoft Teams, which feels like the unloved child of Slack and Skype – there's not even an official client for it on Linux, so I use an unofficial Teams wrapper app that at least makes it possible to quickly set up calls, even though it is very buggy as well.

After my experiment with the Razer Blade Stealth proved somewhat successful (meaning I was sure I could survive on Windows/Linux without missing macOS too much), it was about time to replace my desktop computer as well.

First of all, why have a desktop at all in 2019 – aren't notebooks good enough? Well, yes and no. I could just have gotten a big screen, and for a while I was doing just that with the Razer. The reasons why I wanted to replace my trusty mid-2011 iMac with a new desktop are:

  • Just that tiny bit more power and storage space
  • More room to have a quieter thermal design (Something Apple's iMacs are somewhat good at, but we're getting to that)
  • More connectivity, specifically for my music gear

I've been surveying the market for a while now, and while I did play with the thought of waiting for the Mac Pro or going with a new iMac, my overall positive experience with Windows and Linux on the Razer made me look at other alternatives. The cool thing about the PC market is that it's still really diverse, catering to many vastly different needs. For example, you can get a tricked-out gaming PC with water cooling for CPU and graphics card that will be quite expensive but will also be a beast performance-wise, while you can also get very cheap, quiet and compact, relatively low-performance mini PCs that will do the job for the usual office work.

My performance requirements are somewhere in the middle there.

A wishlist

  • I wanted a rather decent graphics card, mostly to be able to drive a 4K display, but also to throw in the odd game and for that little bit of 3D modelling that I do.
  • I also wanted a CPU that would at least be in the same region as the Razer Blade Stealth's, probably better, to have a little more headroom for VMs or the odd web project build process.
  • I also wanted something that's considerably quieter than the Razer Blade Stealth is (when the fans spin up).
  • Extensibility wouldn't hurt either, of course.

The other cool thing about the PC market is that you're not restricted to buying from the big shots like Dell, HP etc.

Rate my Setup

And so I bought my new PC from Silentmaxx, a small German vendor specializing in silent machines. I won't bore you with the full specs of this machine, but it's a current Intel i5 with 16 GB of RAM, a 1TB SSD and a GeForce GTX 1050 Ti graphics card. The cool thing about this machine is that it comes with exactly zero moving parts (no fans or spinning disks); both the CPU and the GPU have huge passive coolers.

I've since mounted a 2TB hard drive I had lying around into a thing called an HD Silencer, which makes it the very quiet loudest part of this computer. If there's any additional noise in the room (music, the dishwasher from the kitchen, anything, really), the computer is impossible to hear. And I can't say how much I love it.

One thing that my PC doesn't have, and I'm not sure if that's going to be a problem down the line, is a Thunderbolt/USB-C interface. As far as I know, my mainboard is Thunderbolt-capable; it just needs an extra extension card to expose it.

I've since added an AOC U2777PQU 27” 4K monitor to the setup, and with that I am almost done with the migration. One thing I want to add at some point is a better, more modern soundcard, but that can wait.

In conclusion

I'll still keep my MacBook Pro for now, as I sometimes need to test Mac software for one of my gigs, but right now, even though the Windows/Linux world isn't perfect, I'm quite happy. This is by no means a decision for eternity (nothing in computing is, obviously), but if Apple doesn't radically change their approach to general-purpose computing hardware, I can't see myself changing back to a Mac any time soon. That being said, that has happened before, so we'll see.

All in all, including the monitor and other things like a new keyboard/mouse combo, I've now spent roughly 4000 EUR on my new setup, with a powerful laptop and an equally powerful, silent desktop. That's not too shabby.

I always used to be a strong proponent of automation. My line of thinking was: if a task can be automated, it must be somewhat unpleasant to do manually. And I still think that this argument fundamentally holds true. Nobody likes to paint 100 cars a day. Nobody likes to sort metal out of trash. And so on. (So, yes, I'm not talking about developer automation here, a field in which I co-founded a company – we'll get to that later.)

I was also aware that the so-called automation dividend wasn't shared fairly between the owners of the means of production and the workers who now have to do less, or lost their jobs. Quite the opposite – the increasing wealth gap in first world countries is at least partially a function of that unfair share the already rich are putting in their pockets. The oh-so-successful German export industry, for example, is mainly built on wages that have more or less been stagnant (relative to the increase in cost of living) for decades.

Sorry, I didn't know better

So, I was aware of all that, and still, people who fought against very obvious automation always annoyed me. I once had a discussion with a friend of my late father, who, before he finally retired a couple of years back, basically lost his job at least twice. He was a typesetter until that was no longer a thing (replaced, finally and for good, when DTP became viable), then became a printer, and his job was once again replaced as the dominant printing method changed. That being said, he was able to go with the times and work until his retirement. He was very much an opponent of automation for automation's sake, and he had good arguments, too. But to me, it didn't make sense. Printers (the human kind) are often plagued with all sorts of illnesses connected to the toxic ingredients in inks (and thinners and everything), and the less humans have to get in contact with these things, the better. Right?

Two tweets from Marijn Haverbeke finally brought me the epiphany I needed:

Capitalists: I want to produce without labor, taking away the last bargaining chip of the capital-less Us, technologists: wow that's an interesting technical challenge, let me see how I can help you with that // tweet with additional context here

(To be fair, some of the same tech could help build fully automated space communism, but given current power relations we don't seem to be heading there.) // tweet here

Yeah, about that...

And then I just realized: the issue is not automation in itself. As Marijn so cleverly stated (the second tweet is a nod to the “fully automated luxury gay space communism” meme, as I learned later), it can be used to realize both a luxurious utopia and the dystopia that has already arrived but is not evenly distributed yet. The reason we're closer to the latter than the former is, for the most part, the power dynamics in politics. The political left is in shambles in many countries, including my own, the conservatives are driven by an increasingly fringe right, and while everyone should know by now (at least after The Piketty™ happened) that neoliberal economics will lead to nothing good (to put it mildly), there's no political force right now that really questions the current status quo.

All this leads to a conclusion that I simply wasn't able to reach before I read Marijn's tweets: until these power dynamics are seriously challenged and we see a much fairer distribution of the automation dividend, yes, there are lots of really good reasons to oppose any form of automation that automates people out of their jobs.

We're part of the problem

Which brings me to two things I wanted to mention at the end:

a. I think we as developers need to be increasingly aware that we are part of the (or at least a) problem. Also, we're, for the most part, part of “the rich”, at least by some definition.

b. In our own industry, I think we as developers actually get an unfairly large chunk of the automation dividend in the first place, which makes me feel slightly less bad about building developer tooling that at least has the potential to automate people out of their jobs. (Given the current shortage of developers in most places, it's highly unlikely to have that effect, but I guess it can't hurt to be aware of the potential.)


Let me close this with some meta stuff: some of the political posts I made here recently may seem shoddily written – and they are. This is why they end up here on this new platform and not on my official blog. I used to make these kinds of posts (even more pointed ones, in much shorter form) on my main blog, but that was before Twitter. The thing is, if I didn't publish them as quickly as I do and instead tried to turn them into more polished gems, I would not publish them at all, as has happened with countless blog posts in the draft folder of my official blog.

Please tell me if you like these posts or if you notice anything horribly wrong in my reasoning. (Then again, somebody has to do the job of being wrong on the internet, right?)

If you didn't notice, you can follow this microblog write.as-thing on the fediverse (for example, you can follow me if you're on mastodon) at @halfbyte@write.halfbyte.org.

This morning “someone on twitter” posted a link to the CNBC article that made the rounds (and even turned up on Daring Fireball). The Twitter user, whom I deeply respect, reacted with reasonable outrage at the inhuman question put out by the world's largest humanitarian aid organisation, Goldman Sachs. Since you can't see my face right now (and also I'm really good at keeping a straight face, at least when no one's looking): yes, that was sarcasm.

I tried to argue that I find this statement quite logical and actually not that controversial, which did not go down well. I then figured out that I had actually misinterpreted the original quote.

If only I could read properly

The part that angers people about this is of course the “curing” part. The article talks about the sustainability of a business model based on “one-shot” genetic engineering cures. I somehow managed to completely ignore that part and assumed GS had made a much more general statement à la “Is helping sick people a sustainable business model?”.

Now, while those are two very different questions, I think that, in the broader conversation about how health care is financed and structured, the difference is not as big and important as it is for the specific discussion of how bad a human being a Goldman Sachs consultant can be.

First of all, I think that in the current climate (and especially given that we have to assume this statement was made in the context of the completely broken US health care system), GS would not ask that second question, as, yes, for many companies, human health is indeed a very good, sustainable business model. So, that was a completely unforced error on my end.

Who gets to benefit from the current system?

But the actual question GS asks, as cited in that article, is connected, as it exposes, very openly (which should give you a hint about the current environment we're in), a deep conflict in for-profit health care. This can be generalized to free-market capitalism as a whole in the form of “what's best for the customer (patient) is often not what's best for the company or producer”.

In the context of the health care system, this is a huge problem. The main focus of the health care system should be to heal people, in the best way possible given the constraints (cost, time, labor, you name it). If a cure doesn't get developed because its one-off nature makes it hard or impossible to turn into a sustainable business model, this is a failure of the health care system, as in every other respect it would be the optimal solution. Every subsequent touch point with the health care system for a patient who didn't get that one-off cure puts an additional strain on an already loaded system. The only beneficiaries are the private companies who get paid for the extra visits and who developed a cure that doesn't heal in one shot but instead just eases the symptoms (as you can see: a much better business model). Everyone else – the overworked hospital or doctor's staff, the state, which still pretty much subsidizes every aspect of health care up to a point, and most importantly the patient – is worse off.

Which actually leads us to the question I thought I had read when I saw the article the first time: is health care a sustainable business model? Yes – for private companies who, through a ton of lobbying, shady business practices and a lot of unforced errors on behalf of the regulators, managed to carve out a nice niche for themselves in a system that in many countries (including much-praised Germany) relies on exploiting its personnel and to a large degree only functions because people do their utmost to keep it running.

Privatize gains, socialize losses

But this also shines a light on one of the main problems with this construct: health care is one of the most crass examples of the principle of “privatize gains, socialize losses”. The people, either in the form of super expensive private health insurance policies or taxes or both, take the losses and keep the system running, because, for most of the developed world, letting poor people die is not exactly an option (at least for people who are already in the country – totally fine to let them die while crossing the Mediterranean Sea, but I digress), while for-profit companies bag the profits. And this is not only true for things like running hospitals: in many countries, the state (aka taxes) pays for most of the cost of medical research at universities (I believe the system is slightly different in the US), where the results are then often privatized with way too little compensation and, more importantly, increasingly without any regulation on how much profit these companies are allowed to make after bringing new cures to market – again, bagging profits while at least large parts of the initial cost were carried by the public.

This is not to argue that publicly funded medical research should not happen – quite the contrary. One thing that has happened over the last few decades is that, due to increasingly close relationships between pharmaceutical companies and universities, research in many places has shifted towards whatever looks promising in terms of later profitability, in contrast to the things you really want, like finding a good solution for the antibiotics crisis or, maybe, developing one-off cures for diseases we can't heal right now.

And I'm also not suggesting that we should socialize pharmaceutical companies. In some cases, for example for low-margin standard medicine that has been around for ages (insulin comes to mind), that might actually work, but I think the more important part is to bring back proper regulation: make sure companies cannot coerce whole markets into paying ridiculous prices for cheap-to-make standard stuff (again, insulin, but also the EpiPen comes to mind).

And also, maybe, just in general, come to terms with the fact that we've now tested the hypothesis that the whole health care sector can be organized more cheaply by market forces and for-profit companies, and all that has happened is that the cost for the taxpayer has exploded while an increasingly concentrated medical supplies and pharmaceutical industry takes away record profits.

So, yes, I think the question Goldman Sachs posed in that article is a valid one, even if it comes from a very dark, inhumane and cynical place, and instead of spending time arguing against those fuckers, maybe we should start discussing how we can rebuild our social and health care systems in a way that puts patients and their health first, health care personnel second, and is financially viable. I'm afraid the answers we may come up with won't go down so well with Goldman Sachs, but then again, maybe we shouldn't care so much about that.

For a couple of weeks now I've been using Windows more (mostly for one specific project), as I need two very specific things:

  1. A Windows test setup. Theoretically I could migrate my old VMs to Linux and run them from there, but the tests I do are very hardware-specific, and running Windows in a VM unfortunately gives different results.
  2. A VPN connection. This frustrates me the most, as I simply can't make that connection work under Linux, which feels wrong.

The way I work is that, for the most part, I run a VM with a small Linux setup that has my project installed, and I also use that VM to run my editor. I know this is probably not the best setup, but I have by now run into so many weird issues with developing directly on Windows that I don't even trust solutions like Vagrant. And while this project would allow me to work on Windows, everything is actually faster within the VM than on native Windows (or WSL, for that matter). This also feels wrong, but my time is too precious for a deep dive into investigating the matter.

During my Depfu development time, I'm almost exclusively on Linux, where I have now settled on a stock Ubuntu with very little extras.

My plan now is to maybe, during the next few months, set up an actual desktop workstation (and hopefully get a better monitor) – I'm currently pondering a Silentmaxx machine, as they manage to make their beefy machines absolutely silent. My Razer, especially when running Windows with VMs, is mostly not very quiet. (It's also not as loud as my old PC, but after spending years on an iMac, my standards are now somewhat different.)

The reason I want a real desktop machine is to have a little extra headroom, both CPU- and memory-wise, and also to have something that can be a bit more future-proof and upgradeable. The prospect of getting a box that fulfills all that while being entirely passively cooled is very appealing.

But what about the phone?

Now that I have abandoned Apple on the general-purpose computing side of things, I've also started to wonder about my phone. My current phone is an iPhone 6s and I'm still quite happy with it. My original plan was to replace it with one of the new phones, but looking at the lineup, price-wise the XR is the only option for me (and only barely), and I think it's actually a bit too big for me. Now, I am a bit reluctant to switch over to the Android side of things for various reasons, and so, after much thinking, I have decided not to decide yet.
