(or you get what you ordered)
I had a major clash with a nasty bug (or so I thought) today:
For a WPF application I had to load an image from a byte array (coming from a database) and show it. The code is not difficult:
private static BitmapImage LoadImage(byte[] imageData)
{
    if (imageData == null || imageData.Length == 0) return null;

    var image = new BitmapImage();
    using (var mem = new MemoryStream(imageData))
    {
        mem.Position = 0;
        image.BeginInit();
        image.CreateOptions = BitmapCreateOptions.PreservePixelFormat;
        // OnLoad caches the whole bitmap during EndInit, so the stream can be disposed right after
        image.CacheOption = BitmapCacheOption.OnLoad;
        image.UriSource = null;
        image.StreamSource = mem;
        image.EndInit();
    }
    image.Freeze();
    return image;
}
It’s rather simple: wrap the data in a MemoryStream and use it to initialize a BitmapImage, which then gets returned.
So far so good …
This is done in a ViewModel that exposes the loaded image. That property is then bound to the Source property of an Image control on the WPF page, and here the trouble started…
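For context, here is a minimal sketch of what that ViewModel side could look like – the class and member names (ImageViewModel, Load) are my illustration, not the original code. The XAML side would bind with something like <Image Source="{Binding Image}" />:

public class ImageViewModel : INotifyPropertyChanged // System.ComponentModel
{
    private BitmapImage _image;

    public event PropertyChangedEventHandler PropertyChanged;

    // The property the Image control on the page binds to
    public BitmapImage Image
    {
        get { return _image; }
        private set
        {
            _image = value;
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs("Image"));
        }
    }

    // imageData would be the byte array loaded from the database
    public void Load(byte[] imageData)
    {
        Image = LoadImage(imageData);
    }
}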
Letting this run and playing around a bit, I observed the memory consumption of my app go from 40 MB up to 500 MB and more … WTF?
Trying to use the built-in memory profiler (VS2010) is a pain, but the ANTS memory profiler pointed me to “unmanaged code” … WTF?
Ok – so I went and downloaded a trial version of the .NET Memory Profiler (which can handle unmanaged code) and voilà:
Indeed, it seems like a problem in BitmapImage.EndInit()…
Ok – time to fire up Bing (or Google or whatever you like) – voilà: it seems to be a “known bug” with a lot of comments and StackOverflow questions (for example: this). Ok, some sites claimed this would be fixed in .NET 3.5 SP1, and I’m using .NET 4.0, but it still seemed like the most plausible cause – didn’t it?
So I went and tried every proposed fix, to no avail – but sure enough, if I didn’t bind or load the bitmap, the memory behaved…
First glimpse of the solution…
After a lot of testing and trial and error, I finally saw that the memory did not leak without bounds: it always stayed below 600 MB and on occasion quite a bit got freed up again. This was the first clue – maybe it’s not a bug at all. But then why does data coming from a 1 MB JPG cause a rise of more than 100 MB? … WAIT
What was it? A 1 MB JPG? … Look at it and finally see the heart of the matter (“des Pudels Kern”): the JPG has a resolution of about 10,000 × 4,000 pixels … wow – and that decompresses into more than 100 MB, as a simple calculation shows … DOH
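The back-of-the-envelope calculation, assuming WPF decodes to a 32-bit pixel format such as Bgra32 (the exact format depends on the source image):

// 10,000 x 4,000 pixels at 4 bytes per pixel (assuming a 32-bit format like Bgra32)
long width = 10000;
long height = 4000;
long bytesPerPixel = 4;
long decodedBytes = width * height * bytesPerPixel;  // 160,000,000 bytes
Console.WriteLine(decodedBytes / (1024.0 * 1024.0)); // ≈ 152.6 MB for a single decoded bitmap

So just a handful of such images held in memory at once lines up nicely with the observed jump to several hundred MB.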
So that’s it – no bug at all, just some really bad data (those files were uploaded by users who got them by exporting some AutoCAD images…).
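A defensive variant I’d consider in a case like this (my sketch, not part of the original post): cap the decoded size with BitmapImage.DecodePixelWidth, so even a monster image only costs a bounded amount of memory.

// Sketch of a capped loader (requires System.IO and System.Windows.Media.Imaging).
// DecodePixelWidth tells the decoder to produce a scaled-down bitmap;
// leaving DecodePixelHeight unset keeps the aspect ratio.
private static BitmapImage LoadImageCapped(byte[] imageData, int maxPixelWidth)
{
    if (imageData == null || imageData.Length == 0) return null;

    var image = new BitmapImage();
    using (var mem = new MemoryStream(imageData))
    {
        image.BeginInit();
        image.CacheOption = BitmapCacheOption.OnLoad;
        image.DecodePixelWidth = maxPixelWidth; // e.g. 1920 for a full-HD display width
        image.StreamSource = mem;
        image.EndInit();
    }
    image.Freeze();
    return image;
}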
The final question:
Why did I forget the first rule: “First check the user (data)”?