# Christian's Public TODO List

Welcome to my TODO list. You can see I have a lot to do, so why not pick something off the list and do it for me! :-)

## Builder

Builder is an initiative to build a new IDE for GNOME that is 100% focused on writing GNOME software. I would like it to use the Continuous Build Server to execute builds within a virtual machine (aka Simulator).

## Automake Project Helper Tools (aka Fig)

This project is an attempt to make general project management with automake easier by providing good conventions. Projects built with this will "Just Work" in Builder and build easily with the Continuous Build Server.

## Blueprint

Blueprint is an initiative to visualize GObject Introspection information as a graph. This is essentially a read-only UML view of a .gir/.typelib file.

The basic idea is to take the .typelib info and generate a .dot file, pass that to dot/neato/etc. to perform the graph layout, and then read the result back in to update the scene graph.

## Document-based Client-Side Database

Think SQLite, but using GVariant or BSON for the encoding. My current preference is GVariant. This is going to require some performance testing to determine when to use mmap() and when not to. Mobile and desktop oriented only.

## GLib Memory and Performance Improvements

Various things going on in GLib are actually slowing us down and taking up more memory. I suspect this includes GSlice, which is both slower than malloc and, in almost all cases, uses more memory than a modern malloc such as glibc's. Windows is probably one of the few platforms where GSlice still helps.

Refcounting is slow. When threading support is not needed, we should look at taking an extra dereference so we can do ++/-- instead of atomic operations.

## Continuous Build Server

Build systems such as Jenkins and Buildbot rely too much on modern languages and features to be truly useful to me. Software I write needs to be built on more operating systems than just modern Linux, OS X, or Windows 7. Truly portable software requires a portable continuous build system. This means it should run on more platforms than my software does.

* It's okay to only support Git.
* Portable C89 with minimal dependencies.
* Run on at least Solaris 10, RedHat 7.3 (from ~1999), Windows XP, FreeBSD 4.4.
* Developers should be able to submit their working tree to be built.
* All commits to master are built, with build results attached as notes on the git tree.
* All release branches are built on every commit with release configurations.
  - So allow for specifying multiple configurations on a single host. For example, on Solaris I need to build with both Sun Pro CC and GCC.
* Agents should be able to verify the host, and the host verify the agent, with X.509 certs.
  - This is needed so you can trust the build output.
* Ability to save intermediate output (debugging symbols, etc.).
* Ability to archive tarballs (mongo-c-driver-0.94.3+gitrev.tar.gz).
* Work out of the box with Automake with virtually zero configuration.
* Builds should be fast by default, with occasional full rebuilds.
  - If a fast build breaks, automatically retry a full build: call ./autogen.sh if ./configure or make breaks. (See the sketch after this list.)
* Run the tests under gdb so that, if there is an error, a set of debugging helpers can extract state (backtrace, locals, etc.).
  - Allow remote attachment to the debugger if a build option is set?
    + This could allow a web interface to debug directly when viewing the failure, for up to X minutes after the failure.
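To make the fast-build fallback concrete, here is a minimal sketch in portable C89 (the dialect the server itself would be written in). The `run_step()` helper and the hard-coded commands are illustrative assumptions, not part of any existing tool, and the `system()` status check is simplified; a real agent would inspect the wait status properly.

```c
/* Illustrative fast-build fallback: try an incremental build first,
 * and only on failure fall back to a full bootstrap from scratch. */
#include <stdio.h>
#include <stdlib.h>

static int
run_step (const char *cmdline)
{
  printf ("Running: %s\n", cmdline);
  return system (cmdline) == 0;
}

int
main (void)
{
  /* Fast path: assume the tree is already configured. */
  if (run_step ("make"))
    return 0;

  /* The fast build broke: retry a full rebuild. */
  if (run_step ("./autogen.sh") &&
      run_step ("./configure") &&
      run_step ("make"))
    return 0;

  return 1;
}
```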
## Password Synchronization

I have lots of passwords. I try to always generate a new password for every website I reluctantly create an account on, usually in the 32-64 character range. Remembering them all is simply impossible for my feeble mind. Thankfully, Chrome can use Seahorse for key storage. (I'm not sure if Epiphany is doing this yet; I didn't see any saved website passwords in there.)

Additionally, I update my GPG keyring occasionally, and it's always a pain when I have to refetch a key because my normal keyring is on another machine. I even have a few SSH keys that are used with various third-party systems.

This means that every time I switch machines there is some minor inconvenience that could weaken my security. For example, have you ever sent an email without encryption simply because you didn't have your GPG keyring handy?

### Two-way Merge of Keys

We can probably cover the 99% case for two-way merge in very little code. The only really difficult part is when both keyrings have been modified with conflicting information (latest timestamp wins? just duplicate when possible?).

Seahorse could detect a udev event and, upon mount of the file-system, look for a well-known file name, .seahorse-db or something. It would ask for the passphrase for the file (or use existing credentials if they are cached). The synchronization process would extract updated keys and bring them local: import the GPG keys, import the SSH keys, import website credentials. If there have been key removals, Seahorse should cache that information for future sync events with the USB stick (but only as much information as is needed to make the deletion).

## GitHub Flavored Markdown for GLib

I would like a GitHub Flavored Markdown compatible parser for GLib. It should also support code highlighting and such via `highlight` or similar. I have some patches on top of the gs-markdown code from GNOME Software, but I think it needs to parse things in a somewhat different fashion.

## Relocatable Developer Tools

I want a single Git repo where I can keep all of my system-related configuration for development. Not one for vim, one for IDE settings, one for blah blah blah. I want my shell aliases (i=sudo yum install, u=sudo yum update, ...) available everywhere. It should automatically be kept in sync for me.

## Distributed Database using IPv6 for Routing

IPv6 might actually have enough unique address space to route requests for data by their primary OID. This means you could perform all of the request routing in hardware ASICs instead of in the application-layer portion of your packet. Secondary index + routed IPv6 for the final lookup.

Scaling your database could be done by updating routing tables or adding a machine to the router to own a range of IPs (thereby taking over that range of the keyspace in the primary index).

You could reach a higher theoretical throughput, since some of your data would be outside of the typical packet contents. Perhaps not much, though. Worth experimenting.
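A minimal sketch of the addressing idea, assuming the cluster owns a single fixed /64 prefix and that primary OIDs fit in 64 bits: the OID becomes the interface-identifier half of the address, so moving a keyspace range to another machine is purely a routing-table change. The 2001:db8::/64 prefix (the IPv6 documentation prefix) and the example OID are placeholders, and `oid_to_ipv6()` is a hypothetical helper.

```c
/* Hypothetical mapping of a 64-bit primary-key OID into an IPv6
 * address inside a placeholder 2001:db8::/64 prefix. POSIX C. */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>

static void
oid_to_ipv6 (unsigned long long oid, struct in6_addr *addr)
{
  int i;

  memset (addr, 0, sizeof *addr);

  /* Upper 64 bits: the routing prefix owned by the cluster. */
  addr->s6_addr[0] = 0x20;
  addr->s6_addr[1] = 0x01;
  addr->s6_addr[2] = 0x0d;
  addr->s6_addr[3] = 0xb8;

  /* Lower 64 bits: the OID, stored big-endian so that contiguous
   * OID ranges map to contiguous address ranges. */
  for (i = 0; i < 8; i++)
    addr->s6_addr[15 - i] = (unsigned char) (oid >> (i * 8));
}

int
main (void)
{
  struct in6_addr addr;
  char buf[INET6_ADDRSTRLEN];

  oid_to_ipv6 (0xdeadbeefULL, &addr);

  if (inet_ntop (AF_INET6, &addr, buf, sizeof buf) != NULL)
    printf ("OID 0xdeadbeef -> %s\n", buf);

  return 0;
}
```

This prints `OID 0xdeadbeef -> 2001:db8::dead:beef`; a node that owns a range of the keyspace would then simply announce the prefix covering those addresses.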