All posts by murrayc

About murrayc

Murray Cumming is a software developer who works with Linux, C++, and Java. He runs Openismus GmbH - a small consultancy that works with open source platforms.

Simple GWTP and Objectify example with Maven

I’ve been playing with GWTP (an MVP framework for GWT) and Google’s AppEngine (via Objectify). I prefer to use the Maven build system. So I created a simple small example that does this: murrayc-gwtp-appengine-example. Improvements are very welcome. I’d like to learn how the code should be improved.

In case it’s interesting to anyone, here’s why I wanted this little example to exist:

GWT

The official GWT website’s equivalent StockWatcher example/documentation does not demonstrate best practices.

There is official GWT sample code that uses maven and Objectify, but it’s all mixed up with other stuff and I wanted something much simpler.

There is also an official GWT MVP example from 2010 hidden away, but it’s not in git and I didn’t even find it until I started writing this.

Objectify

Objectify’s documentation seems pretty good now that I look at it again, though I struggled at first. I think Google (and StackExchange) kept taking me to example code that used one of the various older versions of the API.

Oddly, Objectify’s documentation suggests that you use objectify-gwt with GWT, but objectify-gwt doesn’t seem to have any documentation and I didn’t seem to need it myself.

GWTP

The official GWTP website’s documentation is rather scattered, with several simple typos that make its example code snippets inconsistent with each other. The documentation seems to have been restructured from some other source. For instance, several internal links don’t take you to what is apparently intended. Some internal links take you to the page itself. Lots of the best documentation is spread across their blog posts, but not brought together properly. They do at least show use of maven in their beginner’s GWTP example, but that example code itself is not in version control as a real project. GWTP is a nice clean API that deserves better.

Of course, it is incredibly hard to describe source code in documentation and keep that documentation and source code up to date and in sync. It needs a system. It never works if you just put the source code inline where no compiler can touch it. Although it’s nice to show source code inline (ideally taken automatically from a real source file), you also need links to the real source code so people can see the latest version.

C++ Core Guidelines: Ownership and Parameters

I’ve been catching up on the wonderful CppCon talks from 2014 and 2015. It looks like we are getting towards a consensus about how to write modern C++, though I guess we are still early in the process, and still coming up with as many questions as answers.

When this all settles down, it might need Scott Meyers to come out of retirement from C++ to combine and update his Effective C++ books. Maybe he’d do it if a really modern C++ emerged from it all.

Talks, Guidelines, Smartpointers and Ownership

Some of the most important talks focus on when pointers, or smartpointers, have “ownership”. Three talks in particular progress towards a rather utopian vision in which (shared) object ownership is clearer in various parts of your code and in which there are no object lifetime bugs:

  • CppCon 2014: Herb Sutter “Back to the Basics! Essentials of Modern C++ Style”  (slides)
    Herb discussed some best practices, particularly around auto and smartpointers.
  • CppCon 2015: Bjarne Stroustrup: “Writing Good C++14” (slides)
    Bjarne discussed how we can agree on guidelines that static analysis tools can use, and how following the rules can let static analysis tools find more bugs, for example by using types to indicate whether a (smart)pointer is owning. As in Herb’s 2014 talk, he suggested that we should get() the raw pointer out of a shared_ptr<> before passing it to a function that doesn’t really want to share ownership. He also introduced the GSL, which has owner<> for when you can’t use shared_ptr<> or unique_ptr<>.
  • CppCon 2015: Herb Sutter: “Better C++14 by Default” (slides)
    The next day, Herb discussed many more ways that static analysis tools could avoid several classes of bugs, and how we can break the rules down into sets called profiles. He suggested some annotations, via the C++ generalized attributes syntax, already in C++11, which could let the tools find even more problems, but showed that we can make good assumptions by default. He showed a Microsoft static analysis tool doing this already.

I like the idea of tools telling me what to do. C++ is more complicated than ever, though also better than ever, and there’s more scope than ever for people to disagree about how to use it. At some point I like to move on and get stuff done. I’m sceptical that we can ever really make our code as safe as Bjarne and Herb suggest, but I’m convinced that we could catch several new categories of problems before runtime, and I want to try.

C++ Core Guidelines, Smartpointers, and Ownership

The C++ Core Guidelines are currently a bit of a muddle. There are many rules that seem to overlap too much, often with insubstantial examples. Too often there are admonitions about what not to do rather than examples of how to do things properly. Trying to comply with one rule will sometimes violate another rule, with no clues about resolving the conflict. The github issues and commits show that conflicts are being clarified, but the whole thing seems so big and spread so thinly that I guess it will be a long time until it feels consistent and cohesive.

The advice often seems to assume that you are writing application code rather than reusable library code, though large real world systems are rarely just one or the other. Therefore there is little consideration of ABI stability. For instance, it suggests “F.7: For general use, take T* or T& arguments rather than smart pointers” and “I.11: Never transfer ownership by a raw pointer (T*)” (and more overlapping advice), but what happens when you need to change the implementation and need to share ownership of the passed instance? You can’t just change a T* to a std::shared_ptr<T> without making existing applications crash. The document suggests GSL’s owner<> in this case, though I guess it will still affect shared library symbol names for methods, and it won’t actually do any reference counting. So in a library where you know that instances get passed around a lot, and there’s a reasonable chance that ownership will be shared, it doesn’t seem unreasonable to default to std::shared_ptr<> even if that obscures the actual ownership in some parts of the code.
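
To make the concern concrete, here is a hypothetical sketch (the names are invented, not from any real library):

#include <memory>

struct Document { /* ... */ };

// A hypothetical library function following F.7: it only looks at the document.
void show_document(const Document* document);

// If a later implementation needs to keep the document alive, changing this to
//   void show_document(const std::shared_ptr<const Document>& document);
// changes the mangled symbol and what callers must pass, so applications built
// against the old ABI break. The choice is hard to revisit after release.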

Furthermore, when looking at other people’s code, how do we distinguish the foo(const T*) method that probably doesn’t keep a copy of the pointer, from all the foo(const T*) methods that might do anything at all because their authors never heard of the guidelines?

Raw pointers as non-owners

As a start, I’m playing with this idea of not overusing smartpointers and letting the lack of a smartpointer indicate non-involvement in the ownership. For instance:

std::shared_ptr<Something> something = get_something();

// This takes a shared_ptr:
do_something_and_keep_it(something); 

// This doesn't care about ownership:
do_something_and_dont_keep_it(something.get());

I’m ready to try this out in an application such as Glom, but fearful of doing it in a library API, such as gtkmm.

Bear in mind that, in the past,

  • I have been one of the maintainers that made glibmm and gtkmm use Glib::RefPtr<>, an intrusive reference-counting smartpointer, for all our wrapper classes of reference-counted glib/GTK+ objects (not widgets). For instance, see the doxygen graph of glibmm/giomm classes deriving from Glib::ObjectBase, and there are many in gtkmm too (not the widgets).
  • I converted most of my Glom codebase to pass all of its data structure classes around by shared_ptr (a custom shared_ptr<> before C++11) rather than trying to decide when they can be deleted.

These have worked out very well, partly because “only use this via this smartpointer” is a nice simple rule to follow. I’ve resisted letting anyone easily get the underlying pointer out of a RefPtr, to preserve this simplicity. Now that there’s this new (at least to me) consensus, I guess we’ll actually add RefPtr::get() and RefPtr::operator*() soon.

But I still hesitate, because I don’t expect the typical developer to be careful enough until there are tools to stop misuse and I don’t expect the typical developer to use the tools even when they exist. So the price of a better experience for experts may be more ways for non-experts to make mistakes. The gtkmm API has tried to be a pure and modern C++ while also being pragmatic and obvious, but it will be hard to keep the balance in this case.

Trying out raw pointers as non-owners in Glom

I tried this out in an “ownership” branch of Glom. I first looked at some static (standalone) functions that took a std::shared_ptr<> because those were unlikely to store any state between calls. Here is the fairly small patch that changed those parameters.

I then looked at some other functions that took a shared pointer to const (std::shared_ptr<const Something>), suggesting that the implementation just wanted to look at it and then forget it. Changing one function’s parameter forces you to look at the calling functions, where you might find that the shared_ptr<> is itself a parameter that should be changed to a non-owning raw pointer. I followed this logic up the call stack for a while and ended up with a larger patch. I forget which function I tried to change first.

I like that this reduces the amount of code where I need to worry about circular references, which I would break with a std::weak_ptr<>. But I still worry enough to wish there was some way to automatically identify where these might happen.
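
For instance, here is a hypothetical parent/child pair (the names are invented, nothing from Glom) where a std::weak_ptr<> breaks the cycle:

#include <memory>

struct Child;

struct Parent {
  std::shared_ptr<Child> child;
};

struct Child {
  // A shared_ptr<Parent> here would create a cycle, so neither object would
  // ever be deleted. A weak_ptr<> observes the parent without owning it:
  std::weak_ptr<Parent> parent;
};

void example() {
  auto parent = std::make_shared<Parent>();
  parent->child = std::make_shared<Child>();
  parent->child->parent = parent; // No cycle: the child does not own the parent.

  // To use the parent, lock() the weak_ptr, which gives a shared_ptr that is
  // empty if the parent has already gone away:
  if (auto p = parent->child->parent.lock()) {
    // Use p here.
  }
} // Both objects are destroyed here, because only the weak_ptr pointed back.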

I’m still a little uncomfortable with this mix of shared_ptr<> and raw pointers, even in purely application code, so I’m not ready to put this in git master just yet. But I think I’ll come around to the idea.

std::unique_ptr parameters to take ownership

The C++ Core Guidelines suggest “R.32: Take a unique_ptr<widget> parameter to express that a function assumes ownership of a widget”, though they currently tell you to avoid std::move() (update: fixed now), which you’d need when calling such a method with a named variable. They don’t mean a gtkmm Widget, but I do.

Watch Herb Sutter’s 2014 talk for the rationale for taking std::unique_ptr<>, but not all types, by value. It’s also mentioned in the C++ Core Guidelines “F.15: Prefer simple and conventional ways of passing information” section.

For instance:

void
SomeClass::take_something(std::unique_ptr<Something> something) {
...
}

...
auto something =
  std::make_unique<Something>("something", 2, 'a');
something->set_extra_foos(foo1, foo2);
bar.take_something(std::move(something));

// Or without the intermediate variable:
bar.take_something(
  std::make_unique<Something>("something", 2, 'a'));

// Or again without the intermediate variable:
bar.take_something(
  create_something("something", 2, 'a'));

Trying out std::unique_ptr<> parameters to take ownership in gtkmm

gtkmm has Gtk::manage(), which roughly means “let the container widget worry about deleting this child widget later”. For instance, you can new a Gtk::Button and call Gtk::manage() on that button before you add() it to a Gtk::Grid. It works well and it’s very simple. Incidentally, all it really does is enable the default behaviour for GtkWidgets – the only clever stuff about Gtk::manage() is how we change the default for gtkmm.
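
For instance, a minimal sketch of the current pattern (a made-up window class, not from the gtkmm examples):

#include <gtkmm.h>

class ExampleWindow : public Gtk::Window {
public:
  ExampleWindow() {
    // The grid, not this code, will delete the managed button later.
    auto button = Gtk::manage(new Gtk::Button("Delete me whenever"));
    grid_.add(*button);

    add(grid_);
    show_all_children();
  }

private:
  Gtk::Grid grid_;
};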

Passing a std::unique_ptr<> to add() could have much the same meaning, and would have the advantage of using standard C++ API and a known convention. I have a patch in bugzilla that does this and there’s been some discussion on gtkmm-list.

However, the result is sometimes hard to love. You have to move the add() to later in your code, to avoid a use after std::move(). And, for some of our API, you really do need to use the pointer right after you’ve add()ed it to a container. So you need to take a temporary “observing” (non-owning) raw pointer before the add().

Currently, the worst part of the gtkmm API for this, which has never felt quite right, though we can blame GTK+, is this:

auto cell = std::make_unique<Gtk::CellRendererText>();
cell->property_style() = Pango::STYLE_ITALIC;

auto cell_nonowning = cell.get(); // yuck
auto column = std::make_unique<Gtk::TreeViewColumn>("something", std::move(cell));
column->add_attribute(cell_nonowning->property_text(), columns.title);
column->add_attribute(cell_nonowning->property_style_set(), columns.italic);

treeview.append_column(std::move(column));

I guess we’ll add the add(std::unique_ptr<>) overloads to allow this, but not deprecate Gtk::manage() until we feel more comfortable with how they make us rearrange our code.

libxml++ 3.0 soon

Kjell Ahlstedt has done some work on a new parallel-installable version of the libxml++ API, to fix some mostly-minor but annoying things that needed ABI breaks. You can see the code in libxml++’s git master.

He’s ready to release a stable 3.0 version soon. So now is the time to mention anything about the API that needs to be changed, please. People too often only look at these things right after the first stable release, when it’s too late.

C++ tuple utils

I’ve recently been writing lots of modern C++ code with variadic templates. For instance, I’ve been trying to make libsigc++ use variadic templates instead of being a mess of generated code.

I often find myself needing utility functions and type traits to manipulate tuples, but the C++ standard library still only offers std::tuple_cat(). Writing these is awkward and that often stops me from experimenting quickly.

So I’m gradually gathering this code together in a little murrayc-tuple-utils library. I’d gladly change the name if this gets any use. Really, I’m surprised that nothing like this seems to exist already, apart from as part of larger projects such as boost::hana and boost::fusion. But boost is a really awkward dependency and those are larger projects with much grander goals.

So far murrayc-tuple-utils has:

  • tuple_cdr(): Removes the first element.
  • tuple_start<N>(): Takes the first N elements.
  • tuple_end<N>(): Takes the last N elements.
  • tuple_subset<pos, len>: Takes len elements, starting at pos.
  • tuple_interlace<T1, T2>: Takes elements from each tuple, interlacing (or zipping) them together.

For each of them, there are also type traits, such as tuple_type_cdr<>::type, though these are not so necessary now that C++14 has decltype(auto) for return types.
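
As a sketch of the sort of thing involved (not necessarily how murrayc-tuple-utils implements it), tuple_cdr() can be written with an index sequence:

#include <cstddef>
#include <iostream>
#include <string>
#include <tuple>
#include <utility>

// Copy every element except the first into a new, smaller tuple.
template <typename Tuple, std::size_t... Is>
auto tuple_cdr_impl(const Tuple& tuple, std::index_sequence<Is...>) {
  return std::make_tuple(std::get<Is + 1>(tuple)...);
}

template <typename T_first, typename... T_rest>
auto tuple_cdr(const std::tuple<T_first, T_rest...>& tuple) {
  return tuple_cdr_impl(tuple, std::index_sequence_for<T_rest...>());
}

int main() {
  const auto t = std::make_tuple(1, std::string("two"), 3.0);
  const auto rest = tuple_cdr(t); // std::tuple<std::string, double>
  std::cout << std::get<0>(rest) << std::endl; // Prints "two".
}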

These are just enough code to make things work for me when I’m in a rush. I’m sure they can be improved, and maybe this is how to get those patches and pull requests.

The project has a complete autotools build structure, with “make check” tests, Doxygen documentation building, a pkg-config .pc file, etc, so you can try it out, improve it, and add to it, without having to mess around with that stuff.

Playing with xdg-app for PrefixSuffix and Glom

xdg-app lets us package applications and their dependencies together for Linux, so a user can just download the application and run it without either the developer or the user worrying about whether the correct versions of the dependencies are on the system. The various “runtimes”, such as the GNOME runtime, get you most of the way there, so you might not need to package many extra dependencies.

I put a lot of work into developing Glom, but I could never get it in front of enough non-technical users. Although it was eventually packaged for the main Linux distros, such as Ubuntu and Fedora, those packages were almost always broken or horribly outdated. I’d eagerly fix bugs reported by users, only to wait 2 years for the fix to get into a Linux distro package.

At this point, I probably couldn’t find the time to work more on Glom even if these problems went away. However, I really want something like xdg-app to succeed so the least I could do is try it out. It was pleasantly straightforward and worked very well for me. Alexander Larsson was patient and clear whenever I needed help.

xdg-app-builder

I first tried creating an xdg-app package for PrefixSuffix, because it’s a very simple app, but one that still needs dependencies that are not in the regular xdg-app GNOME runtime, such as gtkmm, glibmm and libsigc++.

I used xdg-app-builder, which reads a JSON manifest file listing your application and its dependencies, telling xdg-app-builder where to get the source tarballs and how to build them. Wisely, it assumes that each dependency can be built with the standard configure/make steps, but it also has support for CMake and lets you add in dummy configure and Makefile files. xdg-app-builder’s documentation is here, though I really wish the built HTML was online so I could link to it instead.

Here is the manifest.json file for PrefixSuffix. I can run xdg-app-builder with that manifest, like so:

xdg-app-builder --require-changes ../prefixsuffix-xdgapp manifest.json

xdg-app-builder then builds each dependency in the order of its appearance in the manifest file, installing the files in a prefix in that prefixsuffix-xdgapp folder.

You also need to specify contexts in the “finish-args” though they aren’t explicitly called contexts in the manifest file. For instance, you can give your app access to the network subsystem or the host filesystem subsystem.

Creating the manifest.json file feels a lot like creating a build.gradle file for Android apps, where we would also list the base SDK version needed, along with each version of each dependency, and what permissions the app needs (though permissions are partly requested at runtime now in Android).

Here is the far larger xdg-app-builder manifest file for Glom, which I worked on after I had PrefixSuffix working. I had to provide many more build options for the dependencies and cleanup many more installed files that I didn’t need for Glom. For instance, it builds both PostgreSQL and MySQL, as well as avahi, evince, libgda, gtksourceview, goocanvas, and various *mm C++ wrappers. I could have just installed everything but that would have made the package much larger and it doesn’t generally seem safe to install lots of unnecessary binaries and files that I wouldn’t be using. I do wish that JSON allowed comments so I could explain why I’ve used various options.

You can test the app out like so:

$ xdg-app build ../prefixsuffix-xdgapp prefixsuffix
... Use the app ...
$ exit

Or you can start a shell in the xdg-app environment and then run the app, maybe via a debugger:

$ xdg-app build ../prefixsuffix-xdgapp bash
$ prefixsuffix
... Use the app ...
$ exit

Creating or updating an xdg-app repository

xdg-app can install files from online repositories. You can put your built app into a repository like so:

$ xdg-app build-export --gpg-sign="murrayc@murrayc.com" /repos/prefixsuffix ../prefixsuffix-xdgapp
$ xdg-app repo-update /repos/prefixsuffix

You can then copy that directory to a website, so it is available via http(s). You’ll want to make your GPG public key available too, so that xdg-app can check that the packages were really signed by you.

Installing with xdg-app

I uploaded the resulting xdg-app repository for PrefixSuffix to the website, so you should be able to install it like so:

$ wget https://murraycu.github.io/prefixsuffix/keys/prefixsuffix.gpg
$ xdg-app add-remote --user --gpg-import=prefixsuffix.gpg prefixsuffix https://murraycu.github.io/prefixsuffix/repo/
$ xdg-app install-app --user prefixsuffix io.github.murraycu.PrefixSuffix

I imagine that there will be a user interface for this in the future.

Then you can run it like so, though it will also be available via your desktop menus like a regular application.

$ xdg-app run io.github.murraycu.PrefixSuffix

Here are similar instructions for installing my xdg-app Glom package.

I won’t promise to keep these packages updated, but I probably will if there is demand, and I’ll try to keep up to date on developments with xdg-app.

Galaxy Zoo app for iPhone: Learning iOS Development

My Android app for Galaxy Zoo is doing pretty well. Creating it gave me a chance to learn Android development quite thoroughly. So I thought I’d do the same for iOS, with Objective-C. The Galaxy Zoo iPhone app is now mostly done, though it’s only available via TestFlight for now. Please send me your Apple ID (usually your email address) if you’d like to try it.

Over the years, I didn’t much believe the reports of Apple’s APIs, tools, and documentation being so wonderful. But I had hoped things would be better.  Android’s API and documentation can be frustrating, but developing for iOS feels worse, with rather primitive tools. It’s still doable, but I can’t think of anything about it that I prefer to Android development.

This is a rough collection of rants, so I reserve the right to revise my opinion when I have gained more complete understanding.

Objective-C

Learning Objective-C was not too difficult, though I still find its syntax and terminology arbitrarily different and often obstructive. Watching all of Stanford’s “Developing iOS 7 Apps for iPhone and iPad” course (on iTunes here too) helped me gradually get used to Objective C, XCode, and Apple’s *Kit APIs.

I chose to start with Objective-C rather than Swift, just so I’d have a chance to learn it and to avoid any problems caused by Swift still being fairly new. I plan to convert my code to Swift soon, as it’s gradually becoming the default language for iOS development. Unfortunately it looks like Swift inherits most of Objective-C’s weird features.

Whereas in Java or C++, you would call a method on an instance like this:

SomeResult *someresult = something.somemethod(arg1, arg2);

in Objective-C, you’d send a message (not call a method) to an instance, like this:

SomeResult *someresult = [something somemethod:arg1 witharg2:arg2];

And if something is nil (null) then, unlike C++ or Java, there will be no runtime crash or exception – you’ll just get a null result. So you don’t need to check for nulls unless you really need the method to be called, though this leads to all kinds of extra implicit paths through your code. I secretly like this, but it is of course madness.

Users of C++ or Java will also be shocked at how it’s normal in Objective-C to add API to already-defined classes instead of providing the API via derived classes or helper methods. For instance, when using the (not from Apple) AFNetworking library, you can #import “UIImageView+AFNetworking.h” to add methods (messages) such as setImageWithURLRequest to the regular UIImageView class.

XCode

XCode is overly simple when it comes to basic code editing and debugging. I was amazed that I couldn’t just click on an exception’s backtrace to go to the relevant line of code. And I really missed the refactoring features in Android Studio or Eclipse.

XCode is not intuitive, forcing you to learn where to click and when, instead of learning how some structured code works. For instance, you need to know about the magic Ctrl-drag to access UI controls in your code, to respond to UI events from those UI controls.  Embedding your views into tab bars and navigation views is similarly obscure.

Auto Layout

Nothing has frustrated me more than Auto Layout in iOS. Amazingly, iOS had no real automatic layout until iOS 6 in 2012, and I don’t think it was really usable until iOS 7 in 2013. And developers still needed separate layouts (Storyboards) for iPhone and iPad versions of their apps until iOS 8 in 2014.

Auto layout seems to be a matter of playing with:

  • Constraints: You can add constraints, for instance to say that a child view (widget, control) should be the same width as another view, or that it should have a certain size of margin between itself and the leading (left) edge of its container. The XCode UI makes it really hard to see what constraints exist and the XML is obfuscated, so this is really difficult. You still need to specify positions and width/height for all your views, even if they would be changed by the auto layout, so you inevitably have to keep dealing with XCode’s warnings about the “frame” not being the same as it would be at runtime.
  • Content Hugging (vertical or horizontal): A high value stops the view from being made larger, instead “hugging” its content (such as text). I dislike this terminology particularly.
  • Compression Resistance (vertical or horizontal): A high value stops the view from being made smaller.

Content hugging and compression resistance are numeric values, so your layout tends to become littered with arbitrary values such as 100, 250, or 1000, as you try to make things work, with very little sense of overview of the whole layout.

I much prefer the automatic layout systems used in GTK+ and Android, where it’s mostly just a matter of using parent layout containers and telling them how to arrange their children. Android’s layout system is often cryptic, compared to GTK+’s simplicity, but the layout XML is readable so it’s fairly easy to try things out. Stackoverflow answers about iOS autolayout are all about where you should click in various versions of XCode, but the answers about Android layouts can just show XML samples.

Android child fragments can be difficult, but I eventually worked things out. In iOS/XCode I still don’t have a satisfactory way to auto layout child containers that contain child views.

CoreData

CoreData seems to be a way to access a SQL-like database using generated classes. I find the manual code generation quite awkward, but I suppose it works. Most of my difficulties were to do with using CoreData with RestKit. But I eventually found its saveToPersistentStore() method to actually store the data reliably.

I’ve also missed having an equivalent for Android’s SyncAdapter, though I have not yet found the same UI performance problems that made it so necessary on Android.

Beta testing

I can understand Apple’s insistence on reviewing all app uploads to their app store. But it’s annoying that every version of a private beta test has to be approved too.

To let someone beta test your app via TestFlight, you need to know their Apple ID and add it manually in the iTunes Connect website. In comparison, with Google’s Open Beta Testing for Android, you can just publish a URL that lets people opt in themselves.

And as far as I can tell, users don’t get any notification when you release a new beta version. So your old testers generally don’t even install the new version until you’ve bugged them individually to do so.

Modern C++: Variadic template parameters and tuples

Variadic Templates

C++11 lets us define variadic templates, taking any number of parameters, of any type, instead of just a specific number of parameters. Then, when we use the template, we can not only specify the types, we can specify an arbitrary number of different types. For instance,

template <class... T_values>
class Base {
public:
  virtual void something(T_values... values) = 0;
};

class Derived1 : public Base<int, short, double> {
public:
  void something(int a, short b, double c) override;
};

class Derived2 : public Base<std::string, char> {
public:
  void something(std::string a, char b) override;
};

This is useful in extremely generic code such as libsigc++. We are gradually using this in libsigc++ and in glibmm, along with template aliases, to replace code that was previously generated by perl and python scripts to produce multiple versions of each C++ template, with various numbers of parameters. In the meantime, I’ve been playing with variadic templates in a separate experimental project and this is what I’ve learned along the way.

Parameter Packs

The “class... T_values” there is called a template parameter pack. If you are just templating a function then it’s called a function parameter pack:

class Thing {
public:
  template <class... T_values>
  void something(T_values... values);
};

Expanding a Parameter Pack

The “T_values... values” in that method signature is where we expand (or unpack) the parameter pack in a function parameter list. You can use the ... in various ways in the function parameter list. For instance, to receive the types as const references:

template <class... T_values>
class Thing {
public:
  void something(const T_values&... values);
};

In your template, you can then call another method with that parameter pack, by expanding (or unpacking) it into the function argument list. For instance, with “values...”:

template <class... T_values>
class Thing {
public:
  void something(T_values... values) {
    something_else(values...);
  }
};

You can write more complex patterns to change how the parameter pack is expanded. For instance:

template <class... T_values>
class Thing {
public:
  void something(T_values... values) {
    something_else((values + 1)...);
  }

  void other_thing(T_values... values) {
    something_else(const_cast<T_values>(values)...);
  }
};

Storing a Parameter Pack in a std::tuple

However, if you want to keep the parameter values around and use them at some later time, you’ll need to store them in a std::tuple. I think this is why std::tuple exists. For instance:

template <class... T_values>
class Thing {
public:
  void something(T_values... values) {
    tuple_ = std::tuple<T_values...>(values...);
  }

private:
  std::tuple<T_values...> tuple_;
};

Expanding a std::tuple

Then you have another problem. You probably want to call some method with those values. But now you have a tuple instead of a parameter pack. A trick with std::index_sequence<> and a helper method lets you call a normal method (that takes normal parameters), passing a tuple that holds those parameter values:

template <class... T_values>
class Thing {
public:
  void something(T_values... values) {
    tuple_ = std::tuple<T_values...>(values...);
  }

  void do_something_with_values() {
     call_yadda_with_tuple(tuple_,
      std::index_sequence_for<T_values...>());
  }

  void yadda(T_values... values);

private:
  //The helper method.
  template<std::size_t... Is>
  void call_yadda_with_tuple(const std::tuple<T_values...>& tuple,
    std::index_sequence<Is...>) {
    yadda(std::get<Is>(tuple)...);
  }

  std::tuple<T_values...> tuple_;
};

Unfortunately, this does clutter up your code. I haven’t yet managed to write a simple generic call_function_with_tuple(f, tuple) helper method. I hope it is possible.
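
For what it’s worth, a generic helper along these lines looks possible in C++14, though this is just a sketch that I haven’t folded into the class above:

#include <cstddef>
#include <iostream>
#include <string>
#include <tuple>
#include <type_traits>
#include <utility>

template <typename F, typename Tuple, std::size_t... Is>
decltype(auto) call_with_tuple_impl(F&& f, Tuple&& tuple, std::index_sequence<Is...>) {
  return std::forward<F>(f)(std::get<Is>(std::forward<Tuple>(tuple))...);
}

// Call any callable, taking its arguments from the elements of a tuple.
template <typename F, typename Tuple>
decltype(auto) call_with_tuple(F&& f, Tuple&& tuple) {
  constexpr auto size = std::tuple_size<std::decay_t<Tuple>>::value;
  return call_with_tuple_impl(std::forward<F>(f), std::forward<Tuple>(tuple),
    std::make_index_sequence<size>());
}

int main() {
  const auto print = [](int i, const std::string& s) {
    std::cout << i << ": " << s << std::endl;
  };
  call_with_tuple(print, std::make_tuple(1, std::string("one")));
}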

Parameter packs are part of the C++ language. std::tuple<> and std::index_sequence<> are part of the C++ standard library, apparently added specifically for use with parameter packs. I can see the sense in keeping the language as simple as possible, as long as you can provide what you need via library code in that language. But the end result is not very attractive in this case. Luckily, hopefully, this isn’t something you’ll need to use much anyway.

At this point, any reader who already doesn’t like C++’s complexity will like it even less. Showing them a related thousand-line g++ compilation error should turn them away for good (clang++’s compilation errors are much clearer). But if you really like compile-time type-safety, and if you really like to avoid copy/pasted code, you might like that this is at all possible.

It would be understandable for coding guidelines to discourage the use of variadic templates except in special cases, until people are more familiar with them.

Manipulating Tuples

Of course, you might need to call methods with just some of the parameters from the parameter pack, or some combination of parameter packs. Once you have the parameters in a std::tuple, you can manipulate that tuple with some more template cleverness. For instance:

  • std::tuple_cat() concatenates two tuples into one.
  • But I recently needed a tuple_cdr() to remove the first item from the tuple, leaving me the rest.
  • A tuple_car() would just do std::get<0>(tuple) to get the first item, so nobody bothers to write one.
  • I even needed a tuple_interlace() to interlace two tuples together, turning a, b and c, d into a, c, b, d.

It all starts to feel very lispy. Luckily there are lots of people on StackOverflow who enjoy discussing how to implement these. I feel there should be more functions like std::tuple_cat() in the standard C++ library, or maybe in some open source library.

Once you have your new tuple, you’ll probably need to use std::make_index_sequence<> instead of std::index_sequence_for<>, to call your call_*_with_tuple() helper method, like so:

void do_something_more() {
  const auto combined_tuple = std::tuple_cat(tuple1_, tuple2_);
  constexpr auto tuple_size =
      std::tuple_size<decltype(combined_tuple)>::value;
  call_yadda_with_tuple(combined_tuple,
    std::make_index_sequence<tuple_size>());
}

glibmm: Deprecated Glib::Threads

As of glibmm 2.47 (currently unstable), we have deprecated the glibmm threads API, which wrapped the glib threads API. That’s because C++11 finally has its own standard threading API, and in C++14 it’s complete enough to replace all of Glib::Threads.

Here’s what replaces what:

We’ve replaced use of Glib::Threads in our example code and tests, such as the example in our multithreading chapter in the gtkmm book.

This is quite a relief because we never wanted to be in the concurrency API business anyway, and having a standard C++ concurrency API makes it far easier to write cross-platform code. Now if we could just have a standard C++ network IO API and maybe a file structure API, that would be nice.
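
For instance, code that used Glib::Threads::Thread and Glib::Threads::Mutex can now look something like this (a minimal sketch, not the gtkmm book example):

#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main() {
  std::mutex mutex;
  int count = 0;

  // Start some threads that each increment the shared count:
  std::vector<std::thread> threads;
  for (int i = 0; i < 4; ++i) {
    threads.emplace_back([&] {
      std::lock_guard<std::mutex> lock(mutex);
      ++count;
    });
  }

  // Wait for them all to finish:
  for (auto& thread : threads)
    thread.join();

  std::cout << count << std::endl; // 4
}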


Coursera/Stanford course: Algorithms: Design and Analysis, Part 2

A few weeks ago I mentioned completing Part 1 of the online Coursera/Stanford “Algorithms: Design and Analysis” course. Part 2 of Algorithms: Design and Analysis isn’t due to start again until next year, but I didn’t want to wait, so I enrolled in the archived version of the course to watch the videos and do the assignments. I should be ready to just reuse my work when Part 2 starts again for real.

Part 2 was where things got really interesting. The assignments required implementing these algorithms, though the course covered others too:

  • A Greedy Algorithm for job scheduling.
  • Prim’s and Kruskal’s minimum spanning tree algorithms. (Both O(m log n) but Prim’s does better on dense graphs, with more edges.)
  • Modifying a minimum spanning tree to identify clusters.
  • The Knapsack problem (Dynamic Programming – both bottom-up and recursive).
  • Shortest Path with the Bellman-Ford SSSP (Single-Source Shortest Path) algorithm (O(mn) and works with negative edge weights, but fails with negative cycles) as an alternative to Dijkstra’s Shortest Path algorithm (O(m log n) and works only with non-negative edge weights).
  • All Pairs Shortest Path with the Floyd-Warshall algorithm (dynamic programming) (O(n³) and works with negative edge weights, though it fails when it detects negative cycles). Best for dense graphs.
  • All Pairs Shortest Path with Johnson’s algorithm, via one call to Bellman-Ford on a modified graph and repeated calls to Dijkstra on a reweighted graph. (O(mn log n) and works with negative edge weights, but fails with negative cycles.) Best for sparse graphs.
  • A dynamic programming algorithm for the Traveling Salesman Problem.
  • Local search with the 2Sat problem, using Papadimitriou’s Algorithm.

I particularly enjoyed exploring “dynamic programming”, which is really just avoiding unnecessary repeated work after you’ve identified the appropriate sub-problems. It’s identifying the sub problems that is really hard. I enjoyed playing with bottom-up dynamic programming (filling in an array as you go, often discarding the n-2th set of results as you go), and top-down dynamic programming, also known as memoization (usually recursing into sub problems and ideally not doing as many sub problems as you’d do working bottom up).

While implementing a dynamic programming solution for the Traveling Salesman Problem, I learned about Gosper’s Hack for iterating over subsets. It’s now a personal favorite in my toolbox.
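
For reference, here is the trick in a small sketch (my own paraphrase, not the course’s code):

#include <bitset>
#include <iostream>

// Gosper's Hack: given a bitmask x, return the next larger mask with the same
// number of set bits. Iterating it enumerates all subsets of a given size,
// which is just what the dynamic programming TSP solution needs.
unsigned next_subset_same_size(unsigned x) {
  const unsigned lowest = x & -x;          // The lowest set bit.
  const unsigned ripple = x + lowest;      // Carry over the lowest run of 1s.
  const unsigned ones = ((x ^ ripple) >> 2) / lowest; // Leftover 1s, right-adjusted.
  return ripple | ones;
}

int main() {
  // All 3-element subsets of a 5-element set, as bitmasks:
  for (unsigned s = 0b00111; s < (1u << 5); s = next_subset_same_size(s))
    std::cout << std::bitset<5>(s) << std::endl;
}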

As with part 1 of the course, I am not allowed to publish the code of my homework solutions. But I did create a public version of the knapsack problem for solving the Make Change problem without a canonical currency (not a real world set of coins), using dynamic programming, though you shouldn’t use that as a first way to understand the classic knapsack problem. I also implemented a simple greedy algorithm for the Make Change problem with a canonical currency (real world set of coins).

The Make Change problem was interesting because I’ve read that people can and should learn to recognize NP-Complete problems, such as the traveling salesman problem. However, it is not obvious which sets of coins would cause the Make Change problem to be solvable with a greedy algorithm and which would need dynamic programming, though it might at first seem like a minor detail. (I haven’t actually read that paper yet.)
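
To illustrate the difference (a minimal sketch, not my homework code or the version linked above): with the non-canonical coins {1, 5, 12} and a target of 15, a greedy algorithm picks 12+1+1+1 (4 coins), but the bottom-up table finds 5+5+5 (3 coins):

#include <algorithm>
#include <climits>
#include <iostream>
#include <vector>

// Bottom-up dynamic programming: fill in the minimum number of coins for every
// amount up to the target, reusing smaller sub-problems instead of recursing.
int min_coins(const std::vector<int>& coins, int amount) {
  std::vector<int> best(amount + 1, INT_MAX);
  best[0] = 0;
  for (int a = 1; a <= amount; ++a) {
    for (const int coin : coins) {
      if (coin <= a && best[a - coin] != INT_MAX)
        best[a] = std::min(best[a], best[a - coin] + 1);
    }
  }
  return best[amount]; // INT_MAX means there is no way to make exact change.
}

int main() {
  std::cout << min_coins({1, 5, 12}, 15) << std::endl; // 3, not the greedy 4.
}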

I’ve also been reading through Steven Skiena’s The Algorithm Design Manual book, which I can highly recommend. It’s more practical and enjoyable than the classic Introduction to Algorithms book by Cormen et al.

Update: I did part 2 for real when it started again. Here is my certificate:


gtkmm 3.18 and glibmm 2.46

A couple of days ago I made the usual bunch of *mm releases for GNOME 3.18, including glibmm 2.46 and gtkmm 3.18, wrapping glib and GTK+ for C++. This adds the usual collection of new API from glib and GTK+, but the big change is the use of C++11.

Until C++11 is the default for g++, you’ll probably want to use the AX_CXX_COMPILE_STDCXX_11() autoconf macro. For instance, I used this to use C++11 in PrefixSuffix. Most people won’t need to make any other changes to their code or to their build. You won’t notice any change unless you care about using C++11 features.

Using C++11 in gtkmm and friends has been the most fun I’ve had with gtkmm since I had to delve deep into the GObject lifecycle back during gtkmm 1.2. I even made actual code changes to libsigc++, which I’m usually afraid to touch even though I’m the official maintainer.

Learning in general about the deep implications of C++11’s new features reminded me how much I enjoyed learning about C++ at the beginning. It’s once again an interesting time for C++.