I’ve recently wanted to demonstrate the relevance of the .Net ReaderWriterLock[Slim] synchronization primitives.
It’s good to hear from the vendor that it’s better, faster, stronger, but when you can, it’s always worth evaluating it yourself; not that I don’t trust vendors, but I like to have hard numbers, particularly when I assert something that can be critical for my participants’ developments.
So I’ve built a small, simple and, I hope, relevant benchmark to measure the performance impact of ReaderWriterLock[Slim] compared to the naive and uniform use of a Monitor via the C# lock construct.
I wanted to check these two things:
that the RW locks behave as advertised,
what the profile of the gain function looks like.
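To make this concrete, here is a minimal sketch (this is not the benchmark code itself, only the two locking styles being compared) of a shared dictionary guarded first with a plain lock, then with a ReaderWriterLockSlim:

using System.Collections.Generic;
using System.Threading;

class SharedCache
{
    private readonly object _mutex = new object();
    private readonly ReaderWriterLockSlim _rwLock = new ReaderWriterLockSlim();
    private readonly Dictionary<string, int> _data = new Dictionary<string, int>();

    // Naive approach: every access, read or write, takes the same exclusive lock,
    // so concurrent readers needlessly serialize each other.
    public int ReadWithMonitor(string key)
    {
        lock (_mutex)
        {
            return _data[key];
        }
    }

    // RW approach: any number of readers can enter together, only writers are exclusive.
    public int ReadWithRwLock(string key)
    {
        _rwLock.EnterReadLock();
        try { return _data[key]; }
        finally { _rwLock.ExitReadLock(); }
    }

    public void WriteWithRwLock(string key, int value)
    {
        _rwLock.EnterWriteLock();
        try { _data[key] = value; }
        finally { _rwLock.ExitWriteLock(); }
    }
}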
In this article I’ll explain the rationale behind the benchmark, how I’ve implemented it, and finally present the results.
First a warning: this is a difficult article that goes really deep inside the .Net machinery, so if you don’t get it the first time (or even the second or third time…), don’t worry and come back later.
For a training session I taught at the end of last year, I wanted to demonstrate some subtleties of multi-threading, and more specifically some memory visibility issues that should cause a program to hang.
So I developed a small sample that I expected would exhibit the issue, but instead of hanging as expected, the program completed!
After manipulating the program further I obtained the behavior I wanted (the program was hanging), but that still didn’t explain why my original version managed to complete.
I suspected some JITter optimizations, and indeed it was the case, but I needed more information to completely explain this strange behavior.
As is often the case, Stack Overflow was of great help; if you’re curious you can have a look at the original SO thread.
In this article I’ll “build” and explain the issue step by step, trying to make it more understandable than the SO thread, which is indeed quite dry.
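To give you an idea of the kind of code involved, here is a minimal sketch of the general issue (not necessarily the exact sample from the training session): a worker thread spinning on a non-volatile flag.

using System;
using System.Threading;

class VisibilityDemo
{
    // Not marked volatile: in a release build the JITter is allowed to hoist the
    // read of _stop out of the loop below and cache it in a register.
    private static bool _stop;

    static void Main()
    {
        var worker = new Thread(() =>
        {
            while (!_stop) { }          // may spin forever if the write is never observed
            Console.WriteLine("Worker finished");
        });
        worker.Start();

        Thread.Sleep(1000);
        _stop = true;                   // the worker is not guaranteed to see this write
        worker.Join();                  // ... so the program may hang here
    }
}

Marking the field volatile (or introducing any construct that acts as the appropriate memory barrier) is enough to make the program terminate reliably.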
This is a small article about an issue I had recently trying to save some big documents represented as .Net objects in MongoDB using the MongoDB .Net driver.
While saving a “relatively” big document I’ve received the following exception:
System.IO.FileFormatException: Size 32325140 is larger than MaxDocumentSize 16777216.
at MongoDB.Bson.IO.BsonBinaryWriter.BackpatchSize() in c:\projects\mongo-csharp-driver\MongoDB.Bson\IO\BsonBinaryWriter.cs:line 697
at MongoDB.Bson.IO.BsonBinaryWriter.WriteEndArray() in c:\projects\mongo-csharp-driver\MongoDB.Bson\IO\BsonBinaryWriter.cs:line 294
at MongoDB.Bson.Serialization.Serializers.EnumerableSerializerBase`1.Serialize(BsonWriter bsonWriter, Type nominalType, Object value, IBsonSerializationOptions options) in c:\projects\mongo-csharp-driver\MongoDB.Bson\Serialization\Serializers\EnumerableSerializerBase.cs:line 408
at MongoDB.Bson.Serialization.BsonClassMapSerializer.SerializeMember(BsonWriter bsonWriter, Object obj, BsonMemberMap memberMap) in c:\projects\mongo-csharp-driver\MongoDB.Bson\Serialization\Serializers\BsonClassMapSerializer.cs:line 684
at MongoDB.Bson.Serialization.BsonClassMapSerializer.Serialize(BsonWriter bsonWriter, Type nominalType, Object value, IBsonSerializationOptions options) in c:\projects\mongo-csharp-driver\MongoDB.Bson\Serialization\Serializers\BsonClassMapSerializer.cs:line 432
at MongoDB.Driver.Internal.MongoInsertMessage.AddDocument(BsonBuffer buffer, Type nominalType, Object document) in c:\projects\mongo-csharp-driver\MongoDB.Driver\Communication\Messages\MongoInsertMessage.cs:line 53
at MongoDB.Driver.Operations.InsertOperation.Execute(MongoConnection connection) in c:\projects\mongo-csharp-driver\MongoDB.Driver\Operations\InsertOperation.cs:line 97
at MongoDB.Driver.MongoCollection.InsertBatch(Type nominalType, IEnumerable documents, MongoInsertOptions options) in c:\projects\mongo-csharp-driver\MongoDB.Driver\MongoCollection.cs:line 1149
at MongoDB.Driver.MongoCollection.Insert(Type nominalType, Object document, MongoInsertOptions options) in c:\projects\mongo-csharp-driver\MongoDB.Driver\MongoCollection.cs:line 1004
at MongoDB.Driver.MongoCollection.Save(Type nominalType, Object document, MongoInsertOptions options) in c:\projects\mongo-csharp-driver\MongoDB.Driver\MongoCollection.cs:line 1426
Well, the message is clear: it seems I’ve exceeded the MongoDB maximum document size, which is 16MB. Fair enough, this is quite a sane design decision.
First I’ll explain why I had this issue, then how I’ve solved it.
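Before getting into the details, here is a quick way to see how close a document is to the limit (this is only a diagnostic sketch, not the fix described later): serialize the object to BSON yourself and measure the result.

using System;
using MongoDB.Bson;   // provides the ToBson() extension method

static class DocumentSizeCheck
{
    private const int MaxDocumentSize = 16 * 1024 * 1024;   // the 16MB limit from the exception

    public static bool FitsInOneDocument<T>(T document)
    {
        byte[] bson = document.ToBson();                     // serialize to BSON and measure
        Console.WriteLine("BSON size: {0} bytes", bson.Length);
        return bson.Length <= MaxDocumentSize;
    }
}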
As you may know, event handlers are a common source of memory leaks: they keep alive objects that are not used anymore, objects you may think should have been collected, but are not, and for good reason.
In this (hopefully) short article, I’ll present the issue with event handlers in the context of the .Net framework, then I’ll show you how you can implement the standard solution to this issue, the weak event pattern, in two ways, using either:
the “legacy” (well, pre-.Net 4.5, so not that old) approach, which is quite cumbersome to implement
the new approach provided by the .Net 4.5 framework, which is as simple as it can be (a quick sketch follows this list)
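As a teaser, here is roughly what the .Net 4.5 approach looks like, through the generic WeakEventManager class (a minimal sketch with made-up Publisher/Listener types; the article’s own example may differ):

using System;
using System.Windows;   // WeakEventManager<,> lives in the WindowsBase assembly (.Net 4.5)

class Publisher
{
    public event EventHandler<EventArgs> SomethingHappened;

    public void Raise()
    {
        var handler = SomethingHappened;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

class Listener
{
    public Listener(Publisher publisher)
    {
        // Instead of: publisher.SomethingHappened += OnSomethingHappened;
        // the weak subscription does not keep this Listener alive.
        WeakEventManager<Publisher, EventArgs>.AddHandler(
            publisher, "SomethingHappened", OnSomethingHappened);
    }

    private void OnSomethingHappened(object sender, EventArgs e)
    {
        Console.WriteLine("Event handled");
    }
}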
Recently I’ve worked with a web API, the Cometdocs API, in order to use their document converter, particularly to automate conversions from PDF documents to Excel spreadsheets for data extraction.
I wanted to use this API from my two favorite development platforms: Java and .Net/C#, so I needed to build what is called a language binding, i.e. a small library that acts as a proxy between the application code and the web API.
The development of these two bindings was really interesting from a technical point of view, and I’ve learned a bunch of things during the process.
I’d like to share the interesting parts with you; they should be useful even if you have no plans to interact with a web API, because all the technologies and techniques I’ve used (the HTTP protocol, JSON data binding, SSL/TLS…) are applicable to other types of development.
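To give an idea of what such a binding boils down to, here is a rough sketch of the general shape in C# (the endpoint, method name and response type below are purely hypothetical placeholders, not the actual Cometdocs API; the real calls and serializer choices are detailed in the article):

using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;   // Json.NET, used here for the JSON data binding

// Purely illustrative response type, not an actual Cometdocs contract.
class ConversionStatus
{
    public string Status { get; set; }
}

class ApiClient
{
    private static readonly HttpClient Http = new HttpClient();

    // Hypothetical endpoint and parameter names, for illustration only.
    public async Task<ConversionStatus> GetConversionStatusAsync(string baseUrl, string conversionId)
    {
        string json = await Http.GetStringAsync(baseUrl + "/conversionStatus?id=" + conversionId);
        return JsonConvert.DeserializeObject<ConversionStatus>(json);
    }
}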
It seemed technically feasible because Python has a remarkable tool to interact with native code: the ctypes module.
The only issue is that ctypes only supports C interfaces, not C++ classes, so in this case it can’t directly use the YahooAPIWrapper class.
In fact it’s a minor issue because this kind of situation is well known and a well-documented pattern exists to circumvent it: building a C wrapper around the C++ API.
This looks a little crazy because you now have 2 layers between the Python client code and the C# Yahoo API:
Python -> C wrapper -> C++/CLI wrapper -> C# API
So, while I don’t think this layering would have much usefulness in real life, it was a challenging and interesting question.
Looks simple, no? Well, as you know, when you start to pile up heterogeneous layers, unexpected issues can appear, and this is exactly what happened here: it revealed one that is worth talking about.
So keep reading!
Today if you ever need to consume a web API, or produce one, chances are you’ll need to use JSON.
And there are good reasons why JSON has become so prevalent in the web world:
secondly, JSON is less verbose than XML (see my other article on the subject) and can be used for most scenarios where XML was historically used
So whatever language and platform you use, you’ll need a strong JSON parsing component.
In the Java world there are at least two good candidates: Gson and Jackson.
In this article I’ll illustrate how to use Gson: I’ll start with a (not so) simple use case that just works, then show how you can handle less standard situations, like naming discrepancies, data represented in a format not handled natively by Gson, or type preservation.
More and more, JSON is becoming the data interchange format of the web, and it is even starting to leak outside of this world, replacing XML wherever it can, and there are really good reasons for that.
But often people are driven towards JSON for other reasons, not necessarily bad reasons, but based on more anecdotal facts, like the so-called verbosity of XML.
Indeed this is the argument you’ll hear most often; e.g. just have a look at this nice comparison of the two formats: the first con listed is, of course, “verbosity”.
And it’s a factual argument: the size gains can be significant if your values are small, typically when representing business objects like customers, because the markup overhead (all the closing tags) becomes large relative to the carried information (e.g. the names and zip codes of your customers).
But you rarely send big chunks of data in a raw text format like XML or JSON, because nowadays servers and clients (e.g. web browsers) support live gzipping of the payloads and use it transparently.
So the size advantage of JSON over XML should shrink, because GZIP knows how to factor out redundant information like markup.
At least this seems a reasonable speculation, but while intuition is good, hard numbers are better, both to be definitely convinced and to get a numerical idea of the impact.
So I’ve written a small Java benchmark that I’ll present, along with its results, in this article.
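(The benchmark itself is written in Java, but the measurement idea is simple enough to sketch in a few lines of C#: take the same data serialized once as XML and once as JSON, gzip both, and compare the raw and compressed sizes. The payload strings below are of course just tiny placeholders.)

using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class GzipSizeDemo
{
    static int GzippedSize(string payload)
    {
        byte[] raw = Encoding.UTF8.GetBytes(payload);
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(raw, 0, raw.Length);
            }   // disposing the GZipStream flushes the compressed data
            return output.ToArray().Length;   // ToArray is still valid on the closed MemoryStream
        }
    }

    static void Main()
    {
        // In the real benchmark these would be the same data set serialized in both formats.
        string xml  = "<customers><customer><name>John</name><zip>12345</zip></customer></customers>";
        string json = "{\"customers\":[{\"name\":\"John\",\"zip\":\"12345\"}]}";

        Console.WriteLine("XML : raw {0} bytes, gzipped {1} bytes", xml.Length,  GzippedSize(xml));
        Console.WriteLine("JSON: raw {0} bytes, gzipped {1} bytes", json.Length, GzippedSize(json));
    }
}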