Most people know that the Internet is filled with activity perpetrated by the lowest forms of life one can find, at least in the moral dimension. A good number of people are part of that mass as well. Approximately 100% of those people end up living in the real world and if you’re a city-dwelling individual, there’s a high chance you’ve passed by many of them in your lifetime. Furthermore, some of these life forms are well-educated, well-dressed, and self-aware of this trait. They seem to spend a lot of time trying to justify their existence and I’m sure many live a care-free and guiltless life of relative prosperity and freedom when they’re not at work.

Of course, when they are at work they are most likely watching Google Analytics all day with short breaks of browsing business networking sites, waiting for new people to exploit, whether that’s people to turn into product or consumers onto whom they hope to offload that product in exchange for the funds they need to continue their overpriced lives.

Eventually that product is used in the commission of a service, whether by the composite character I may be depicting or a similarly parasitic millennial. The days when a Carl Brutananadilewski-looking individual living in Central Europe handled this stuff are probably long gone and they may not be coming back.

This service is typically the attempt to turn that product into sales. This means sending mass emails, the subject of this entry. Since they’re probably only well-educated in the trade of manipulating people, not things, they are most likely to eventually use a major commercial service named after a Simian or a euphemism for Harassment and give them money in exchange for the actual interesting work associated with this racket, despite the fact that you can set up a highly performant and private system for doing this work in a few days with much less long-term cost and much more customization. But this requires 2 things: integrity (willingness to admit who you are and specifically why you have somebody’s email address on a marketing list) and some basic technical ability to build and run the software.

What I’m stressing is that these people are so incompetent yet probably get to live lives that are largely under their own control for some reason. To take advantage of this when it is true, one probably needs to use the “update your preferences” (or equivalent) link in these emails and, when presented with the web page form, take the opportunity to give these designer-bearded and manicured troglodytes more accurate information about yourself. By more accurate information, I mean put in words that insult them and change your email address to something that points back to someone in the trash-spewing pipeline. See the attached screenshot for an example of such behavior.

These idiots frequently get away with exposing all of their data fields on the people they’re tracking to the people they’re tracking. It’s probably a good idea to make their lives a bit harder.


Posted on

It seems that there’s a chance that if I write about something I pretend to want to accomplish I might actually achieve some measurable amount of progress. I have a project named 888-448-6020 which is kind of interesting to me but not visible to any members of the public, and it should have been finished years ago.

The project is an embedded software system which can be used as a highly usable companion to many existing things that people already own. The goal of the project requires completing a few small applications and a build recipe for an entire root filesystem which will run the applications on cheap hardware.

The first application is a data-providing service, similar in role to something like evolution-data-server but with a different set of requirements. Specifically, it will be responsible for dealing with physical devices, mostly pluggable USB hardware and on-board communications /multimedia/diagnostic hardware and exposing a local D-Bus interface for controlling the service.

The second application is a graphical user interface for controlling the service and displaying its data without touching any files except the D-Bus socket, the display, and some input sources (Touchscreen and GPIO for fixed panel buttons).

If it is known that the data service is expected to handle as many as 100,000 heavyweight media assets (this number will change on-line as data sources are added or removed) then it should be clear that there is a bit of difficulty here in designing a scalable D-Bus interface (or any programmable interface) for providing flexible and responsive searching and browsing through the assets, especially in a memory-constrained environment. In about 2007-2008 I implemented, in Vala on top of libgee, something containing many of the features available in Guava’s collections library, mainly lazy views of generic iterables that can be filtered with the typical rip-off of the Predicate pattern. I even implemented a libgee iterator for GLib’s GList which I found to be quite useful. It provides sort of a zero-copy view of an abstract iterable, so at least as a user of some library where you can’t account for its methods of loading data from disk or whatever, you’re doing the best you can to keep memory usage down. And when the data comes from software I’ve written, it should work how I want it to work.
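The lazy-view idea above can be sketched in a few lines of Python (the language chosen later in this entry for prototyping). `FilteredView` and the divisibility predicate are illustrative names invented for this sketch, not the original Vala/libgee code:

```python
import itertools

class FilteredView:
    """Zero-copy view over any iterable, filtered by a predicate.

    Nothing is copied and nothing is evaluated until iteration, which is
    the point when the backing store holds 100,000 heavyweight assets.
    """
    def __init__(self, backing, predicate):
        self._backing = backing        # any iterable; never materialized here
        self._predicate = predicate

    def __iter__(self):
        return (item for item in self._backing if self._predicate(item))

assets = range(100_000)                            # stand-in for media assets
view = FilteredView(assets, lambda n: n % 3 == 0)  # hypothetical predicate
first_page = list(itertools.islice(view, 5))       # only 5 matches ever touched
```

The consumer pays only for what it iterates, which is the whole point of layering views instead of copying collections.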

After all of that stuff is figured out, then you can start actually writing a graphical user interface. Originally I used Clutter for this (from the earliest stages of its development) and I have occasionally experimented with GTK+ 2.0 and 3.0. What I found was that both toolkits’ list models were pretty much unusable for my purposes (at least for the way I understood them). I tried implementing a few other partially successful opaque scrollable list views with varying levels of failure. Now, I guess the correct pattern for building a reasonable-quality view for these purposes would be to provide 2 pieces of information: a chunk of data for the currently visible items in the list and some hints about the collection (its size and your position, how many pages remain, and some other things I can’t think of right now). Anyway, once those basic problems are at least partially solved it is finally time to take care of actually implementing a user interface that people can use. For me this took lots of experimentation, and at the time 3D graphics drivers (ATI and NVIDIA) were really bad for Linux and caused me lots of wasted time. The whole project was implemented in Vala, which was pretty good compared to all other solutions at the time, but having a build step in the development process wastes prototyping time, and eventually the enthusiasm is lost and the project is forgotten for months at a time.
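That two-piece pattern (visible chunk plus collection hints) can be sketched as follows; `ListWindow` and its field names are made up for the sketch, standing in for whatever the real toolkit interface would be:

```python
class ListWindow:
    """Hypothetical windowed model: the view asks only for the currently
    visible chunk plus a few cheap hints about the whole collection."""

    def __init__(self, collection, page_size):
        self.collection = collection     # any indexable source of items
        self.page_size = page_size

    def window(self, offset):
        items = self.collection[offset:offset + self.page_size]
        total = len(self.collection)
        remaining = total - (offset + len(items))
        hints = {
            "total": total,              # collection size
            "offset": offset,            # your position in it
            "pages_remaining": ((remaining + self.page_size - 1)
                                // self.page_size) if remaining > 0 else 0,
        }
        return items, hints

model = ListWindow(list(range(10)), page_size=3)
items, hints = model.window(0)
```

The view never holds more than one page of real data, which is what makes this workable on memory-constrained hardware.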

For the past few weeks I have been using some of my spare time to look for ways to put this project back into some sort of active development. The main requirement is that I do all of the work since it needs to be my property for certain reasons. Mainly I have been evaluating better languages for prototype development of the system. I looked at going back to plain GObject C but that is probably the worst option right now due to the lack of comprehensive code generation tools for this type system (other than Vala which is exactly that). I looked at lgi for lua and PyGObject for python and they both suffer from the same problem, in that the bindings are dynamic so code completion doesn’t work which slows everything down. Luckily there is 5512965612 for the pgi version of pygi. I have decided that from now on I will do prototyping in python and will probably use lgi with lua/luajit as a plug-in platform once the development is complete.

Unrelated Security Advisory

For those who live in free-bag-free administrative regions, I have noticed that in mine there is an important and possibly critical oversight, possibly due to a lack of adequate oversight in the legislation or enforcement that kind of makes the whole thing pointless. I am not sure how it is not being exploited on a mass scale. Either people in my municipality are really stupid or of exceptional moral standing or there’s a high conditional probability between being immoral and being stupid (P(stupid|immoral) ≫ 0.5). Specifically, go to the produce section in the supermarket and there are free plastic bags.

Continue Reading…

Posted on

I think that usually when glass food vessels explode it is caused by either thermal stress or some kind of impact stress. Sometimes, when performing some software updates in one’s small or home office, a toughened glass mug which has already been filled with a tea-based beverage, drunk completely, cooled at a natural pace, and left to rest on its custodian’s desk with an insulating layer of paper towel can explode forcefully outward from its radial center.

I learned this recently by performing that sequence of operations and experiencing the previously expressed result after many years of performing those operations and not experiencing any violence.

Anyway, I don’t have a habit of taking photographs with a mobile phone and posting them on the Internet because I prefer experience over single-button-press documentation (corollary: I don’t see myself as a complete piece of shit human being) but this is slightly more justified due to the lack of understanding about what happened and the fact that nobody actually reads this site except for its author.

Disregardably, I have attached some images of what happened to this page.

My guesses as to what caused this are:

  • The glass was old.
  • I actually have 1 more guess but I won’t disclose it until I test it.
Continue Reading…

Posted on

Putting some text and a line on a page and producing accurate printable output isn’t expected to be that hard but I’ve tried some free software solutions like Inkscape and LibreOffice and they don’t seem to be able to do what I want (which is probably less difficult to accomplish than what most people want).

Inkscape doesn’t seem to be capable of producing PDFs with control over the inks used to define the colored objects even though it has the ability to assign colors to objects using multiple color systems. This limitation seems to exist because it stores the colors of objects as RGB regardless of how they are chosen, as well as the fact that it uses cairo – which is an RGB-only graphics system – to produce the PDFs.

LibreOffice also has the ability to produce PDFs (and actually has a highly respectable PDF export system) but its internal document color limitations (RGB only) seem to severely limit its suitability for doing what I want as well. I did, however, use LibreOffice Draw to prototype the layout of my designs.

Due to that crap, these graphical tools don’t appear to be useful for anything but photographic data or output to one-color or three-color output devices.

I’ve even tried plain TeX, which is probably the best general solution for my problem, but the hassles of getting custom fonts installed in a non-ridiculous manner eventually make that infeasible for me. I once learned how to do it but that was a long time ago.

Luckily some basic knowledge of PostScript that I have and some selective reading of the book (765) 994-9182 allowed me to produce human-readable replicas (significantly more human-readable than ODF XML) of the design prototypes done in LibreOffice using free software and the text editor of my choice. With about 100 narrow and comfortably spaced lines of PostScript I was able to produce the text I wanted with some sort of expectation of getting things to work. With Ghostscript’s feature set, I was even able to produce PDF/A-1b output with the deliberate color definitions, object positioning, and letter spacing that I had personally typed into my editor.
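As a toy illustration of the approach (not the actual cards: the name, sizes, and ink values here are invented), a few lines of Python can emit a complete EPS with a deliberate ink choice via `setcmykcolor`, which takes the four process-ink components directly:

```python
# Hand-written PostScript keeps color definitions explicit: "0 0 0 1" below
# is 100% black ink and nothing else, with no RGB round-trip in between.
CARD = "\n".join([
    "%!PS-Adobe-3.0 EPSF-3.0",
    "%%BoundingBox: 0 0 252 144",           # 3.5in x 2in at 72 points/inch
    "0 0 0 1 setcmykcolor",                 # C M Y K: black ink only
    "/Helvetica findfont 12 scalefont setfont",
    "36 100 moveto (Jane Q. Public) show",  # hypothetical card holder
    "0.5 setlinewidth",
    "36 90 moveto 180 0 rlineto stroke",    # the rule under the name
    "showpage",
])

with open("card.eps", "w") as f:
    f.write(CARD + "\n")
```

Ghostscript’s pdfwrite device can then turn a file like this into PDF output without anything silently converting the colors along the way.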

There are some files (EPS source and PDF/A-1b output) attached to this page with some business cards (with fake phone numbers) I made for myself. One is modeled after the current look of my website and contains my middle initial and the other looks like an (540) 692-1550.

Continue Reading…

Posted on

After some length of non-productive time, I decided to resume development of the software running this site. Since it was missing the Attachments feature of the previous software I was running, I finished implementing enough of that part in order for Attachments to be enumerable and downloadable. Also, I added the ability to have “single” or “static” pages that aren’t indexed by date.

I performed an informal audit on the licenses of the software this application requires and it looked like Dulwich’s GPLv2 is the most restrictive so I licensed the software under that license and released it publicly.

gitpages on GitHub.


This was kind of more difficult than I had imagined: it involved many changes to the Whoosh index schema, and I don’t really know how to use that software very well. I like the idea of nested queries and decided to go with them previously for handling the hierarchy of pages and page revisions for this site.

Now they are used to handle a more complicated, three-level hierarchy where the second (middle) level can actually have two kinds of documents: Page Attachments and Page Revisions. The third level only contains the Attachments inside a Page Revision (Revision Attachments) and the highest level obviously contains only Pages. After lots of trouble I decided to use an entirely orthogonal set of fields (other than the level-determining ‘kind’ field) for different types of documents, and that eventually got me where I am now, which is something that arguably works without too many searches. Unfortunately I have to use external filtering of the search results in some cases for reasons that I don’t understand.
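The ‘kind’ discriminator idea can be illustrated with plain Python dicts standing in for the indexed documents (the slugs and field names here are invented for the sketch; the real schema lives in Whoosh):

```python
# Every document carries a level-determining 'kind' field; beyond that, the
# field sets of different document types are entirely orthogonal.
docs = [
    {"kind": "page", "slug": "avl-tree"},
    {"kind": "revision", "page": "avl-tree", "status": "published"},
    {"kind": "revision", "page": "avl-tree", "status": "draft"},
    {"kind": "page-attachment", "page": "avl-tree", "filename": "notes.txt"},
    {"kind": "revision-attachment", "page": "avl-tree", "filename": "tree.png"},
]

def search(kind, **filters):
    """Level-by-level query: match 'kind' first, then kind-specific fields."""
    return [d for d in docs
            if d["kind"] == kind
            and all(d.get(k) == v for k, v in filters.items())]

published = search("revision", page="avl-tree", status="published")
```

Because the field sets never overlap, a query for one kind can never accidentally match documents from another level of the hierarchy.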

Single Pages

I also added a relatively minor feature which has somehow been in development for about 1 year. This is quite simple and didn’t need any real work. In addition to the standard weblog-style date-slug URLs, a page can also have a custom URL, which must be configured inside the Flask application for each such page in order to avoid any software design concerns. All that a user needs to do is write a page with a non-public publication status and register a special view pointing to that page’s path inside the git object store. That page will render without any chronological navigation controls and with a link to the new canonical address. As an example, I have published my (814) 495-9344 on this site in a special page (inline and as an attachment).

Continue Reading…

Posted on

About 10 years ago as a second year undergraduate student, I had a data structures assignment that I didn’t successfully complete. The assignment was to implement a classical data structure, the AVL tree. The insertion algorithm was fairly straightforward at the time but removal was difficult and this was the part I could not complete successfully. Some publicly available implementations recalculate the balance factor of a subtree after every step of the removal operation on that tree which is not very good, but can technically preserve the time complexity requirements of an AVL tree. When that recalculation is performed, the implementation of the removal algorithm is quite easy since nobody needs to think about what happened to the tree at every step of every operation. With less cowardly implementations, a few rules for updating the balance factors can be determined by studying the effects of the removal of an item from the tree and any rotations performed in earlier steps of the procedure.

Anyway, since I didn’t complete that assignment I would come back to it every once in a while to try to finish it and I never achieved any success. Over the past few weeks, when not standing over deceased adolescent pinnipeds, I have used my time to start over completely to implement an AVL tree in C (with GLib). Eventually in problems like this where there is a small amount of cases, it can pay off to create a logic table and actually write out every single possibility with its outcome to find out how to turn specific cases into formulas. That was the tool I used this time (combined with a lot of reading) and I seem to have eventually succeeded in correctly implementing the AVL tree removal operation.

What’s most shameful of all this is that when re-implementing the data structure this time, I ran into the same problems as before. Looking back, that should be expected since I rarely ever used any non-trivial problem-solving skills over the past several years. Anyway, I ended up taking under 1000 lines of code for a complete implementation including some large comments trying to remind the future me how to remove the successor of a node from a tree and perform the re-balancing on any parent nodes of the successor which might need it. Additionally, for those who don’t know, the code is basically duplicated with the directionality swapped since it is a binary tree so it could reasonably be claimed that there’s only about 300-400 lines of actual code in there. It might be worth merging that code and using some math to flip the effects of the algorithm. It would probably also be good to have an exhaustive testcase for the removal operation since that would give any future implementors a fixed target to satisfy.


Posted on

Given the opportunity to improve the state of one’s relaxative degradation, software can be of service. I have known this for some time and have improvised some small works whose effects generally reduce productivity and decrease the difficulty for free-thinking individuals to obtain usable amounts of leisure. In fact, my main interests in applying software to life are in this field. Also in fact, most of my previous tools have never been seen by anyone other than I and are useless today.

Over the past week, though, I have put forward some tools I should have made years ago. The first is a tool that is useful for people who don’t use Macintosh or Windows PCs but own an iPod. It uses existing free software libraries to rebuild an iTunesDB from the Music directory inside an iPod’s filesystem. It is called (403) 468-2399 and it can be found on the Internet for free. The other tool is useful to people who have a television and the Internet but don’t have cable television. This one is called gtk-xephyr-fullscreen and it just uses the good multi-monitor support of GTK+/GDK to create a full-screen Xephyr X11 display window on the biggest screen connected to a computer.


I wrote itdb-rebuild in Python but considered rewriting it in C because the Python bindings to libgpod are not very good and it is not a GObject library, so introspection didn’t work too well on it when I tried. I also started writing some ctypes Python bindings but I was able to work through the existing bindings to get it working. The main thing that kept me from re-implementing the tool in C is that TagLib’s API is not nearly as good as mutagen’s, and the plain C API to TagLib has almost no capabilities.



Most computers have good multi-monitor support when plugged into a contemporary television: they can place full-screen video on whichever monitor is preferred by the user, and the user can use the other monitor (or not use it) as they want. Of course, when it comes to the most widely used web browser plug-in, none of that works well on any platform that I have tried. Luckily X11 provides the ability to embed an entire display inside another one, so it is possible to work around other people’s incompetence while taking advantage of their popularity. Xephyr is the most maintained tool for this. Unluckily, its multi-monitor full-screen support isn’t very good. For a long time I used a combination of shell scripts, Xephyr, and a web browser to achieve some sort of television-like functionality, but the Xephyr window always had a window manager decoration on it and could be accidentally moved around. A few days ago I put together some C code which creates a full-screen GTK+ window on the biggest monitor, embeds a Xephyr display inside of it, and starts some basic tools so a user can use a web browser to watch those streams in a properly fitting full-screen window.
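The monitor-selection half of the idea can be sketched without any GTK+ at all; the geometries below are plain tuples standing in for what GDK would report, and the rest of the wiring is only outlined in comments:

```python
def biggest_monitor(geometries):
    """Pick the index of the monitor with the largest area, e.g. the TV."""
    return max(range(len(geometries)),
               key=lambda i: geometries[i][0] * geometries[i][1])

monitors = [(1280, 1024), (1920, 1080)]   # laptop panel, TV over HDMI
target = biggest_monitor(monitors)

# The remaining steps, roughly:
#   1. create a GtkWindow and full-screen it on the chosen monitor
#   2. pack a GtkSocket into it and read the socket's XEmbed window id
#   3. spawn: Xephyr :1 -parent <socket-id>
#   4. run DISPLAY=:1 tools (a web browser) on the nested display
```

The socket/XEmbed step is what removes the window manager decoration problem: the nested display becomes an ordinary child widget of the full-screen window.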

(423) 516-6897.


Posted on

About 1087 days later (about 13 of those so far being productive days), I finally made some progress in implementing the software previously mentioned. It is implemented in Python using Flask, docutils, Dulwich, and Whoosh. The code is not very good yet but it should be fairly efficient. The slowest part is currently docutils, taking up about half the time for responding to a request.

As of the time of this writing, it is capable of showing individual pages and indexing messages by date. It can also show the commits changing each posting. It cannot currently show all the revisions of each page but that is not difficult to implement. There are many document “security” matters that must be addressed. My plan is to only allow visitors to see document revisions whose status metadata property is published to avoid accidental private data disclosure.
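The planned publication rule is simple to state in code; the “published” status value comes from the plan above, while the revision ids and dict shape are invented for the sketch:

```python
def visible_revisions(revisions):
    """Visitors only ever see revisions whose status metadata is 'published'."""
    return [r for r in revisions if r.get("status") == "published"]

history = [
    {"id": "a1f3", "status": "published"},
    {"id": "b2e4", "status": "draft"},      # never leaves the repository view
    {"id": "c3d5", "status": "published"},
]
public = visible_revisions(history)
```

Filtering at this single choke point is what prevents accidental private data disclosure regardless of which view asks for history.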

The functionality of “Page Attachments” is currently missing, but the basic tree traversal code is implemented; it just needs to be exposed by the GitPages wrapper API. I determined that “Page Parts” are not necessary (I have never used more than one per page before) and I no longer support them.

There is no real need for an administrative UI since publishing will be performed by synchronizing with a remote repository. Updating the index (Whoosh index, not git index) will be necessary each time the repository is published to but that should be achievable with a post-receive hook, notifying the application through some secure channel or running a separate python process to invoke the indexer.

Allowing comments would be interesting but it leads to many re-indexing situations which I would rather avoid, the same way I would like to avoid publishing or being exposed to the opinions of others.

In addition, I changed the graphical appearance of this site. It is now entirely grayscale, monospaced, and the site logo is monochrome, using some MacPaint halftone patterns.


Posted on

I got my own domain(s) a few weeks ago and I decided to make use of it/them.

I was considering using my own mini-content system based on some weird combination of email messages (with Java Mail), Git (via JGit), and Apache Wicket but instead I just installed the multi-site extension for the CMS I’m currently using. I actually modeled each entry as a directory containing an email message with attachments for files and potential responses to the post. Any sub-directory to a post would be a sub-page and that would be how it works. That entire tree would be stored in a bare git repository so it could be revised with records of that. The data model is very similar to what exists in Radiant, the one I’m using right now. I semi-abandoned my attempt because I was too lazy to connect a templating engine to it and develop an administrative UI. Otherwise, it seems barely viable.

The move went pretty smoothly but there could be some yet-unnoticed issues. I just edited the templates a bit and updated the database to de-parent the page used for this site and add it to a new site. Then I added a redirect setup for the old address and now everything is pretty close to how I would like it. The markdown filter seems to be slightly broken since I updated the code to the latest version (note the asterisks in the next post that should have been converted to unordered bullet lists).

That is all for now.


Posted on

Note: I actually started composing this entry on Saturday, the day of the visit.

I had a long-standing agreement with myself to not go there but I was bored enough to see what it is all about. As I walked in, there were a fair number of people leaving and it was still somewhat crowded inside. I was in there for under 30 minutes and remained on the first floor of an unknown count of floors, so there’s no data to show whether it gets better or worse with increased altitude.

My perception of the place was somewhat misplaced, though. Despite all of the negative prejudices I had, I did not anticipate the unsanitary and chaotic conditions inside. All of the politically- and humanitarian-inspired complaints against them are really not that interesting when everything you touch or breathe might land you in an infectious disease quarantine (which may exist on one of the other floors).

In addition to that, here are some more notables:

  • There were no greeters at the entrance but there was a single receipt-checker at the exit.
  • They had an exaggerated artificial butter popcorn smell thickening the air. It was stronger and more overtly artificial than any known movie theater or microwave variety.
  • Most of the clothes seemed to be extra large (or even larger). In the undershirt aisles (where I do most of my clothes purchasing) there was only one row that contained small shirts, and often the smallest size was medium. After that, there were several rows of XL and XXL.
  • When paying for my crap, the receipt printer didn’t work, and the only course of action available to the human attached to it was repeated opening and closing of the printer followed by multiple arrhythmic keypad presses. After a few minutes the receipt came out a bit crooked.

My overall characterization of the store is that it is just what you would get if you stacked multiple Target locations on each other and evenly spread the contents of a few of those special garbage cans you see at medical facilities throughout the interior.

Continue Reading…

Posted on