Last week I was at a notary here in Munich to officially put Openismus GmbH into its liquidation phase, after seven years. The company is closing down, though with no debts and with a little left over. I feel good about that.
This has been the plan for a while since it became harder to get reliable customer work, though that was more a result of the company structure and my own time constraints than any particular change in the tech economy. It became possible once the last few employees had found good jobs to move on to.
I got past any sadness about this a long time ago. I guess it would be nice for things to be running along at their best again, but there was never a sense of security and always a stressful balance of risk and responsibility. It was a good problem to have.
For the last year or so I’ve mostly been busy with all the tedious work of shutting down the offices in Munich and Berlin, along with the day-to-day paperwork involved in running a company. Now I feel a sense of relief to be free of these responsibilities.
It will be a few months until I start to look seriously for what’s next. I’ll probably look for a nice stable development and management job here in Munich, and I’ll try particularly hard to find something that lets me work part time so I can pick my kids up from school.
In the meantime I’m enjoying the small sense of achievement that comes from taking care of all the little things I’ve let slip over the past few years and catching up with a bunch of tech stuff that I haven’t had time to learn in depth.
This work brings the Maliit Keyboard into the QML/QtQuick2 world for Qt5, removing the use of QGraphicsView, which is not really suitable for Qt5. This should also have some performance advantages, and it makes customization even easier.
Michael Hasselmann blogged a summary of the state of Maliit today. The recent work, along with the Wayland integration, has made Maliit more popular than ever. But we still need to line up customers to fund the ongoing development, generally while creating custom features or solutions for them.
Our work on the underlying Maliit Framework for Canonical was published upstream as we did it. We believe we’ll be able to upstream more of our Maliit Plugins work in the future.
Versions of these Maliit Plugins commits were published a few days ago in the Ubuntu Phablet project’s maliit-plugins Launchpad/Bazaar repository. It also contains commits (not by us) on maliit-plugins’ Nemo Keyboard, mostly for integration with the Ubuntu Touch platform (and its use of Android’s Surface Flinger). The recent Ubuntu Touch preview is using a version of that Nemo Keyboard, though we believe that’s meant as a temporary solution. A properly integrated Maliit Keyboard should behave significantly better.
Anyway, these commits add these features to the Maliit Keyboard:
Auto-capitalization.
Styling, such as a black underline for the current word and a red underline for a word with an error, though it’s up to the toolkit exactly how it shows this.
Word prediction, error correction, etc. are now available when editing previously-entered words, not just the next word, taking the surrounding words into account.
Users can add words to the dictionary with a long press on the space key.
More settings: to enable/disable auto-capitalization, auto-correction, word prediction, error correction, and audio feedback, and to choose whether the word ribbon should be disabled in portrait mode.
Applications can specify text and icons for action keys, such as Done, Go, Login, etc.
Many of these features were already in the old MeeGo Keyboard (used by the Nokia N9) which had to be dropped last year because of its libmeegotouch dependency and its need for proprietary plugins to achieve these features.
We hope to have all this in an official Maliit release soon.
Over the last few months, I have worked on Rygel’s documentation, along with Krzesimir Nowak and Jens Georg here at Openismus. Most of that work is now finished. It’s been a great investment of time that should be of real benefit to the project.
We’ve massively improved Rygel’s (C) API documentation, which was rather bare after Rygel’s initial split into shared libraries. We had to investigate how the current plugins use the API, and sometimes improved the API in response. (The very latest API documentation improvements will be online soon, when we do a new Rygel release.)
We’ve added both simple and real-world examples, linking to them from sections in the API documentation and describing how those examples work. Those real-world examples are standalone GStreamer-0.10-based versions of the regular Rygel media engine and of its media-export server plugins, plus a GStreamer-0.10 version of the standalone renderer example. The original code for these (now using GStreamer-1.0) was in Vala, like the rest of Rygel, so we had to convert them to C. To maintain functionality, we chose to clean up the horribly-obfuscated C code generated by Vala. That took us a few frustrating weeks, but we got it done.
The new Rygel Integration page provides an overview of the APIs that platforms should find interesting, linking to the various documents that we’ve created during this effort. That Integration page is part of a complete overhaul of Rygel’s wiki project pages to make them more attractive and useful.
To help with maintenance of Rygel itself, we now have a Rygel Architecture page with descriptions of Rygel’s program flow in various situations, and a Rygel architecture diagram showing how the various parts of Rygel work together.
The OnlineGlom demo does not require a login. However, the code does let you set up a server that requires a login, and I noticed that a successful login for one person became a login for everybody else. So after the first login, it was as if no login was required for anybody. Yes, really. Of course, this would not do.
So I fixed that, I think, learning some things about Java Servlet sessions along the way. This text is mostly for my own reference, and so that people can tell me how wrong I am, because I’d like to know about that.
I now store the username and password (Yes, that’s not good, so keep reading), associated with the session ID, in a structure that’s associated with the ServletContext, via the javax.servlet.ServletContext.setAttribute() method. I get the ServletContext via the ServletConfig.getServletContext() method. I believe that this single instance is available to the entire web “app”, and it seems to work across my various servlets. For instance, if I log in to view a regular page, the images servlet can also then provide images to show in the page. I’d really like to know if this is not the right thing to do.
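In outline, it’s something like this (the class and attribute names here are hypothetical, not the actual OnlineGlom code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.ServletContext;

/** Hypothetical username/password pair for one logged-in session. */
class Credentials {
    final String username;
    final String password;

    Credentials(final String username, final String password) {
        this.username = username;
        this.password = password;
    }
}

/** Shares one sessionID-to-credentials map across all servlets in the web app,
 *  by storing it as a single attribute on the ServletContext. */
class SessionCredentialsStore {
    private static final String ATTRIBUTE_NAME = "sessionCredentials";

    @SuppressWarnings("unchecked")
    static Map<String, Credentials> get(final ServletContext context) {
        synchronized (context) {
            Map<String, Credentials> map =
                    (Map<String, Credentials>) context.getAttribute(ATTRIBUTE_NAME);
            if (map == null) {
                map = new ConcurrentHashMap<String, Credentials>();
                context.setAttribute(ATTRIBUTE_NAME, map);
            }
            return map;
        }
    }
}
```

Each servlet can then reach the same map via getServletConfig().getServletContext(), so a successful login in one servlet is visible to the others, but only for that session ID.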
However, it still stores your PostgreSQL username and password in memory, so it can use it again if you have the cookie from your last successful login. It does not store the password on disk, but that is still not good, because it could presumably still allow someone to steal all the passwords after a break-in, which would then endanger users who use the same password on other websites. I cannot easily avoid this because it’s the PostgreSQL username and password that I’m using for login. PostgreSQL does store a hash rather than the plaintext password, but it still requires the plaintext password to be supplied to it. I think I’ll have to generate PostgreSQL passwords and hide them behind a separate login username/password. Those generated PostgreSQL passwords will still be stored in plaintext, but we won’t be storing the password entered by the user. I’d like to make this generic enough that I can use other authentication systems, such as Google’s for App Engine.
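With that separation, the password that the user actually types would only ever need to be stored as a salted hash. Java’s built-in PBKDF2 support would do, as in this minimal sketch (the class name and parameters are illustrative):

```java
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

/** Hypothetical helper: store only a salted PBKDF2 hash of the user's password. */
class PasswordHasher {
    /** Generates a fresh random salt to store alongside the hash. */
    static byte[] newSalt() {
        final byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    /** Derives a 160-bit hash; the iteration count should be tuned for the hardware. */
    static byte[] hash(final char[] password, final byte[] salt)
            throws GeneralSecurityException {
        final PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, 160);
        final SecretKeyFactory factory =
                SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        return factory.generateSecret(spec).getEncoded();
    }
}
```

At login, hashing the supplied password with the stored salt and comparing the result to the stored hash would avoid keeping the user’s own password anywhere.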
To avoid session hijacking, I made the cookie “secure”, meaning that it may only be provided via a secure protocol, such as HTTPS, so it can only be read by the server via HTTPS. (Preventing client (JavaScript) code from reading the cookie is the job of the separate “HttpOnly” flag, not of “secure”.) I did that with the javax.servlet.http.Cookie.setSecure() method, though I had to make a build change to make that available.
The login servlet now checks that it has been called via HTTPS, by using the ServletRequest.isSecure() method, and it refuses to do any authentication if HTTPS was not used, logging an error on the server. I now use HTTPS even when testing via mvn gwt:run.
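Together, those two pieces look roughly like this (a sketch, with a hypothetical cookie name and the actual authentication elided):

```java
import java.io.IOException;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Hypothetical login servlet showing the HTTPS check and the secure cookie. */
public class LoginServlet extends HttpServlet {
    @Override
    protected void doPost(final HttpServletRequest request,
            final HttpServletResponse response) throws IOException {
        if (!request.isSecure()) {
            // Refuse to authenticate over plain HTTP, and log it on the server.
            log("Login attempted without HTTPS. Refusing to authenticate.");
            response.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }

        // ... check the username and password here, and create a session ID ...

        final Cookie cookie = new Cookie("OnlineGlomSession", "the-session-id");
        cookie.setSecure(true); // The browser will only send this back over HTTPS.
        response.addCookie(cookie);
    }
}
```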
Actually, the entire site must therefore be served via HTTPS, not just the login page, or we would violate the Same Origin Policy by mixing protocols, which the browser would rightfully complain about. At this point I noticed that most serious sites with logins now use HTTPS for their entire site. For instance, Google, Amazon, Facebook. This seems like a good simple rule, though I wonder if many projects don’t enforce it just to make debugging easier.
Glom uses PostgreSQL and doesn’t try to offer the user a choice of anything else. That’s because it does what Glom needs, there’s no need to confound the user with an incomprehensible choice, and I’ve no wish to maintain multiple sets of code. It’s hard enough keeping up with changes in PostgreSQL, though Glom’s regression tests help.
However, I played around with adding MySQL support as a build-time alternative via the --enable-mysql configure option. The basic stuff now works both in the UI and in the regression tests. Those tests can now run each self-hosting database test with all 3 backends.
This is mostly just so I could learn about MySQL, so I can reimplement it in Java for OnlineGlom. That would let me use Google’s Cloud SQL, which is based on MySQL. The main work has been figuring out how to initialize a MySQL database store on disk and then start and stop MySQL instances. It’s even more funky than with PostgreSQL. I did need an addition to libgda to support non-standard MySQL port numbers but, as usual, Vivien Malerba fixed that for me quickly. There’s also the rather huge problem that AppArmor on Ubuntu prevents us from starting MySQL with anything but the standard database data, and we can’t expect the user to go editing AppArmor config files. At least with MySQL 5.6, it should be possible to start a MySQL instance without the few seconds of passwordless startup that MySQL 5.5 requires. I need to start and stop custom instances so I can run tests automatically.
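For reference, it’s roughly this dance (the paths and port here are illustrative, the exact commands and flags vary between MySQL versions, and the custom data directory is exactly what AppArmor objects to):

```sh
# Initialize a database store in a custom directory:
mysql_install_db --datadir=/tmp/glom-test-data

# Start a private instance on a non-standard port and socket:
mysqld --datadir=/tmp/glom-test-data --port=3307 \
       --socket=/tmp/glom-test-mysqld.sock &

# ... run the tests against port 3307 ...

# Shut the private instance down again:
mysqladmin --socket=/tmp/glom-test-mysqld.sock --user=root shutdown
```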
I’ve committed it to Glom’s git master, and it’s in the 1.23.3 release, just in case anyone wants to improve it. There are some TODO_MySQL comments in the tests where we expect something to fail with MySQL. For instance, I have not added support for editing the MySQL users and groups. And there are likely to be problems with keeping data when changing field types, which doesn’t seem to be tested thoroughly for any backend. libgda is also missing some support for binary field types, needed for Glom’s image fields.
Over the years, various people have complained about Glom not using MySQL. Here is your chance to actually work on that, with tests to show if your work is enough.
Our Ubuntu packages for the Maliit framework and keyboard had not been updated for a while, so just before the holidays I uploaded 0.93.1 versions for Ubuntu Quantal to the Maliit PPA. I fixed various lintian warnings along the way.
We’d really like some help getting this into official Debian and Ubuntu.
Over the last couple of weeks, I’ve been playing with a Jenkins installation at jenkins.openismus.com, building some of the Openismus projects. Here are some notes about my experience.
Installation
This runs on an Amazon EC2 instance. Initial installation was surprisingly simple and well documented, though it took me a while to figure out how to use Jenkins properly. I initially used the official Ubuntu 12.10 packages for Jenkins, but they are a little old so I had to switch to using the Debian/Ubuntu packages from jenkins.org to fix a bug with the copyArtifacts plugin. The two packages seem to be structured very differently, so I had to remove all the Ubuntu Jenkins packages before installing the jenkins.org packages, to avoid a conflict.
See also the Jenkins standard security setup instructions, though I had to use the “Jenkins own user database -> Allow users to sign up” feature first, to create a user which I could then enter in to the matrix grid. I then disabled the “Allow users to sign up” checkbox.
Although Jenkins can use slave servers, and probably should, I’m doing everything on one server for now, because I’m afraid of the Amazon EC2 costs getting out of control. Luckily we don’t need to run each build more than once or twice per day to get some benefit. Later I will probably try running EC2 spot instances for the builds. Maybe that won’t be too expensive.
Git-based projects with Jenkins
You’ll need to use the pluginManager page to install the Git plugin, so that there is something other than “None” listed under “Source Code Control” when creating a job. Of course, we have to “apt-get install git” too. We must also specify a git username and email address for the “git plugin” on the configure page, to avoid “Please tell me who you are” errors in the job when Jenkins tries to locally tag the checked-out git repository. Neither the configure page nor the pluginManager admin page seems to be linked from anywhere, so I had to discover them via Google searches.
Be careful to specify the master branch rather than leaving that blank, or I think Jenkins will try building arbitrary branches, and maybe all of them.
You can specify a “git clean -dfx” via the “Clean after checkout” option under the “advanced” section, and you probably should so you get a truly clean build each time.
You can use the “Poll SCM” build trigger, with the cronjob syntax, to regularly check the git repository for changes. This is not ideal; to do it properly you’d need to add a git hook to the git repository to request a build from your Jenkins server whenever there is a git commit.
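For instance, a post-receive hook on the git server could be as small as this (the URLs here are illustrative; the notifyCommit endpoint comes from the Jenkins Git plugin):

```sh
#!/bin/sh
# post-receive hook: tell Jenkins to poll this repository now,
# instead of waiting for the next scheduled poll.
curl --silent "http://jenkins.example.com/git/notifyCommit?url=git://example.com/project.git"
```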
Multiple branches
You can specify more than one git branch, to make Jenkins try building more than just one, but it’s hard to see which branch was built when looking at the results.
Simple maven builds
For maven-based Java projects such as OnlineGlom, Jenkins is very straightforward, because maven typically downloads all the dependencies without expecting anything to be installed already, and “mvn package” typically does the whole build.
Autotools builds, or similar
For autotools-based projects, you’ll need to make sure that you’ve “apt-get install”ed the project’s dependencies.
Then you must specify the configure and make (or qmake) steps in a build step.
Of course, many real-world projects will need newer versions of their dependencies. For instance, we build maliit-plugins, which depends on maliit-framework, which we develop in sync. For this, I:
Tell the maliit-framework job’s build to “make install” into a local directory, via the “--prefix=” configure option.
Use an “archive the artifacts” post-build action to store everything in that directory.
Use a “copy artifacts from another project” build step in the maliit-plugins job.
Export several variables so the build system has access to the dependency in the local prefix, as sketched below. (Maliit uses hateful, awful qmake instead of autotools, but you’d need something similar for autotools.)
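The exact variables depend on the project, but it’s roughly this kind of thing, assuming the archived artifacts were copied into a local-prefix directory inside the job’s workspace (the paths here are illustrative):

```sh
# Let pkg-config, the dynamic linker, and the shell find the
# dependency that was installed into the local prefix:
export PKG_CONFIG_PATH="$WORKSPACE/local-prefix/lib/pkgconfig:$PKG_CONFIG_PATH"
export LD_LIBRARY_PATH="$WORKSPACE/local-prefix/lib:$LD_LIBRARY_PATH"
export PATH="$WORKSPACE/local-prefix/bin:$PATH"
```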
You can use the “Build after other projects are built” build trigger to make Jenkins try a build whenever a dependency is built.
I have not tried this with multiple built dependencies. I imagine it could get awkward. It feels like Jenkins needs a plugin for autotools to make this simpler.
Multiple configurations
You can create “multiple configuration” jobs to try multiple ways of building your project. For instance, you might provide different sets of options to your configure script. But I couldn’t use this feature due to the spaces that it puts in the build paths. So I created separate top-level jobs for each configuration. Other people seem to do the same, maybe for the same reason.
Email notification
I’ve tried using Amazon’s Simple Email Service to send notification emails about build failures, but I don’t have that working yet. I’ll update this if I do.
I found some work in one of my old branches and cleaned it up, so now OnlineGlom supports image fields too.
As usual, it was far more work than seemed necessary. GWT’s Image widget is not much more than a wrapper around the HTML <img> tag, so I had to create a separate service, with the same authentication system, to serve image data and invent a URL syntax to refer to the images from the database. It is certainly easier with GTK+ code on the desktop, even when delivering the image data asynchronously. This feels like something that a web programming system should take care of, even if this is what happens behind the scenes. I wonder if any do.
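The service boils down to a servlet like this sketch (the URL syntax, parameter names, and helper methods here are hypothetical, not OnlineGlom’s actual API):

```java
import java.io.IOException;
import java.io.OutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Hypothetical servlet that serves image data from database fields,
 *  for URLs such as /images?table=contacts&primarykey=123&field=photo . */
public class ImagesServlet extends HttpServlet {
    @Override
    protected void doGet(final HttpServletRequest request,
            final HttpServletResponse response) throws IOException {
        // The same session check as the other servlets:
        if (!isAuthenticated(request)) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }

        final byte[] imageData = fetchImageFromDatabase(
                request.getParameter("table"),
                request.getParameter("primarykey"),
                request.getParameter("field"));

        response.setContentType("image/png"); // Or whatever type is stored.
        response.setContentLength(imageData.length);
        final OutputStream out = response.getOutputStream();
        out.write(imageData);
        out.flush();
    }

    private boolean isAuthenticated(final HttpServletRequest request) {
        return true; // Placeholder: check the session cookie against stored sessions.
    }

    private byte[] fetchImageFromDatabase(final String table,
            final String primaryKey, final String field) {
        return new byte[0]; // Placeholder: query the image bytes from the database.
    }
}
```

The GWT Image widget on the client then just gets one of those URLs as its src.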
Next, I want to make sure that OnlineGlom can handle tables whose primary keys are not numeric, because we’ve been hard-coding that in a few places. Then I hope I can start the big job of supporting data editing.
I just booked my travel and hotel to visit the Ubuntu Developer Conference in Copenhagen at the end of October, along with Michael Hasselmann and Mathias Hasselmann, all of us representing Openismus.