Row filtering in the ORM

For a while now we've had an unwanted behavior in our Solution Engine product: the larger the on-device database got, the longer the app took to load, to the point that some devices in the field were taking nearly 5 minutes to boot (up from roughly 1 minute under normal circumstances). This morning we decided to go figure out what was causing it.

First, we pulled a database file from a device that is slow to boot. It turns out that the database was largely empty except for about 50k rows in a log table where we record general boot/run information on the device for diagnostics.

At startup the logging service pulls the last hour of log information and outputs it to the console, which has proven to be very helpful in diagnosing bad behaviors and crashes. Looking at the code that gets that last hour of data, we saw the following:

var lastHourEntries = m_store.Select<SFTraceEntry>(a => a.TimeStamp >= selectFromTime);

Now let's look at this call in the context of having 50k rows in the table. What it effectively says is "Retrieve every row from the SFTraceEntry table, hydrate an SFTraceEntry class for each row, then walk through that entire list checking the TimeStamp field. If the TimeStamp is less than an hour old, copy that item to a new list, and when you're done, return the filtered list." Ouch. This falls into the category of "using a tool wrong". The ORM supports FilterConditions that, depending on the backing store, it will attempt to translate into a SQL statement, an index walk, or something more efficient than "return all rows". In this case, the change was as simple as this:

var dateFilter = new FilterCondition("TimeStamp", selectFromTime, FilterCondition.FilterOperator.GreaterThan);
var lastHourEntries = m_store.Select<SFTraceEntry>(new FilterCondition[] { dateFilter });
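As an aside, that Select overload takes an array of conditions, and (if I'm remembering the store implementations correctly) multiple conditions get ANDed together, so a bounded time window is just a second FilterCondition. Here's a sketch, with the caveat that selectToTime and the LessThan operator name are my assumptions, not code from the product:

// hypothetical bounded time window; only GreaterThan is confirmed above,
// so check the FilterOperator enum for the exact member names
var windowFilter = new FilterCondition[]
{
    new FilterCondition("TimeStamp", selectFromTime, FilterCondition.FilterOperator.GreaterThan),
    new FilterCondition("TimeStamp", selectToTime, FilterCondition.FilterOperator.LessThan)
};
var windowEntries = m_store.Select<SFTraceEntry>(windowFilter);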

Getting Mono Process Info from a Mono App

Since a large amount of the work I tend to do is for embedded devices, and since I don't like having to visit deployed devices to restart an app whenever it may crash, a pretty common pattern I use is to create a watchdog application that periodically checks to see if the actual application is running and starts it if it's not (a sketch of that loop follows the class source below).  It's a bit more complex than that because I typically have support for intentional shutdowns and I like to log all restarts for diagnostics, but the general premise is pretty simple.  The app should always be running.  If it's not, start it again.

In Mono (under Linux anyway) that task turns out to be a bit of a challenge.  Process.GetProcesses doesn't work because, rather than giving the actual application name like the .NET Framework does under Windows, Mono simply returns a Process with a ProcessName of "mono-sgen" for every Mono app that's running.  I can't differentiate between the watchdog app, the target app and any other Mono app that may or may not be running.

I ended up creating a new class I called LinuxProcess (for lack of a better name).  Ideally it would be a Process derivative, or even rolled back into the Mono source, but for now it’s stand-alone and feature-limited to what I needed for a Watchdog.  Full source is below (sorry about the length, but I prefer this over a zip, and it’s searchable and indexable).

using System;
using System.IO;
using System.Linq;

using Output = System.Console;
using System.Collections.Generic;

namespace System.Diagnostics
{
	public class LinuxProcess
	{
		private Process m_process;

        private LinuxProcess(string fileName)
        {
            m_process = Process.Start(fileName);
        }

		private LinuxProcess(int pid)
		{
			Id = pid;

			try
			{
				m_process = Process.GetProcessById(pid);
			}
			catch (ArgumentException)
			{
				// GetProcessById throws, rather than returning null,
				// if the process has already gone away
				Output.WriteLine("GetProcessById failed for PID " + pid);
			}
		}

        public static LinuxProcess Start(string fileName)
        {
            return new LinuxProcess(fileName);
        }

        public bool HasExited
        {
            // if we never got a Process handle, treat it as exited
            get { return m_process == null || m_process.HasExited; }
        }

		public void Kill()
		{
			m_process.Kill();
		}

		public static LinuxProcess[] GetProcessesByName(string processName)
		{
			return GetProcesses().Where(p => p.ProcessName == processName).ToArray();
		}

		public static LinuxProcess[] GetProcesses()
		{
			var list = new List<LinuxProcess>();

			foreach (var path in Directory.GetDirectories("/proc")) 
			{
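				// numeric directory names under /proc are process IDs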
				var d = Path.GetFileName(path);
				int pid;

				if (!int.TryParse(d, out pid))
				{
					continue;
				}
					
				// stat
				var stat = GetStat(pid);
				if (stat == null) continue;

				var proc = new LinuxProcess(stat.PID);
				proc.ProcessState = stat.State;

				// look for mono-hosted processes.  Depending on the runtime,
				// the host executable may report itself as "mono" or "mono-sgen"
				if (stat.FileName == "(mono)" || stat.FileName == "(mono-sgen)")
				{
					// TODO: handle command-line args to the Mono app
					var cmdline = GetCommandLine(stat.PID);

					// cmdline[0] == path to mono
					// cmdline[1] == mono app
					// cmdline[1+n] == mono app args
					proc.ProcessName = cmdline.Length > 1
						? Path.GetFileName(cmdline[1])
						: stat.FileName.Trim(new char[] { '(', ')' });
				}
				else
				{
					// trim out the parens
					proc.ProcessName = stat.FileName.Trim(new char[] { '(', ')' });
				}

				list.Add(proc);
			}

			return list.ToArray();
		}

		private static Stat GetStat(int pid)
		{
			try
			{
				// /proc/<pid>/stat holds the process name and state
				var statPath = string.Format("/proc/{0}/stat", pid);
				if (!File.Exists(statPath))
					return null;

				using (var reader = File.OpenText(statPath))
				{
					var line = reader.ReadToEnd();
					return new Stat(line);
				}
			}
			catch (Exception ex)
			{
				Output.WriteLine("Stat Exception: " + ex.Message);
				return null;
			}
		}

		private static string[] GetCommandLine(int pid)
		{
			// The command line arguments appear in this file as a set of null-separated strings, with a further null byte after the last string. 
			using (var reader = File.OpenText(string.Format("/proc/{0}/cmdline", pid)))
			{
				string contents = reader.ReadToEnd();
				var args = contents.Split(new char[] { '\0' }, StringSplitOptions.RemoveEmptyEntries);
				return args;
			}
		}

		public int Id { get; private set; }
		public string ProcessName { get; private set; }

		public ProcessState ProcessState { get; private set; }
	}

	internal class Stat
	{
		internal Stat(string procLine)
		{
			try
			{
				// NOTE: this simple split assumes the parenthesized
				// executable name contains no spaces
				var items = procLine.Split(new char[] { ' ' }, StringSplitOptions.None);

				// PIDs can exceed the Int16 range, so parse as Int32
				PID = Convert.ToInt32(items[0]);
				FileName = items[1];

				switch (items[2][0])
				{
					case 'R':
						State = ProcessState.Running;
						break;
					case 'S':
						State = ProcessState.InterruptableWait;
						break;
					case 'D':
						State = ProcessState.UninterruptableDiskWait;
						break;
					case 'Z':
						State = ProcessState.Zombie;
						break;
					case 'T':
						State = ProcessState.Traced;
						break;
					case 'W':
						State = ProcessState.Paging;
						break;
				}
			}
			catch (Exception ex)
			{
				Output.WriteLine("Stat parse exception: " + ex.Message);
			}
		}

		public int PID { get; private set; }
		public string FileName { get; private set; }
		public ProcessState State { get; private set; }
	}

	public enum ProcessState
	{
		Running, // R
		InterruptableWait, // S
		UninterruptableDiskWait, // D
		Zombie, // Z
		Traced, // T
		Paging // W
	}
}
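
To close the loop on the watchdog scenario described at the top of this post, usage ends up looking something like the sketch below. The app name, launcher script path and poll interval are all placeholders, and a real watchdog would also log restarts and honor intentional shutdowns:

// minimal watchdog loop sketch ("MyApp.exe", the launcher path and the
// 30-second poll interval are placeholders)
while (true)
{
	if (LinuxProcess.GetProcessesByName("MyApp.exe").Length == 0)
	{
		Console.WriteLine("Target app not running - restarting it");

		// Process.Start wants a single executable, so a shell script
		// that invokes "mono MyApp.exe" works well as the launch target
		LinuxProcess.Start("/opt/myapp/start.sh");
	}

	// requires a reference to System.Threading
	System.Threading.Thread.Sleep(30000);
}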

OpenNETCF ORM Updates: Dream Factory and Azure Tables

We've been busy lately.  Very, very busy with lots of IoT work.  A significant amount of that work has been using the Dream Factory DSP for cloud storage, and as such we've done a lot of work to make the Dream Factory implementation of the OpenNETCF ORM more solid and reliable (and produced a pretty robust, stand-alone .NET SDK for the Dream Factory DSP as a byproduct).  It also shook out a few more bugs and added a few more features to the ORM core itself.

I've pushed a set of code updates (though not an official release yet) up to the ORM Codeplex project that includes these changes, plus an older Azure Table Service implementation I had been working on a while back, in case anyone is interested and wants to play with it, use it or extend it.  The interesting thing about the Azure implementation is that it includes an Azure Table Service SDK that is Compact Framework-compatible.

As always, feel free to provide feedback, suggestions, patches or whatever over on the project site.

Diskprep Availability

Diskprep.exe is a useful tool for making a bootable USB disk with an OS, but recently it seems to have disappeared from Microsoft's downloads. I can't say whether it's another of those subtle hints about the future of Windows CE, an oversight due to the lack of resources dedicated to Windows CE, or just a simple mistake that will get corrected shortly.  Regardless of the cause, there are people who still find the tool useful, so I'm providing a download mirror of the tool here.

MJPEG (and other camera work)

Back in 2009 I was doing a fair bit of work for some customers in the security field.  I did some proof-of-concept work and came away with some code that, while not groundbreaking, might at least be useful to others.  It's really too small to bother starting a Codeplex project for, unless I get some pull requests, in which case I'll turn it into a full project.  In the meantime, feel free to Download the source.

New ORM Release: v1.0.14007

I've finally gotten around to wrapping up all of the changes I've made in the last year (has it really been that long since the last release?) to the OpenNETCF ORM library.  The changes have always been available in the change set browser, but I actually have them as binary and source downloads now.  I probably should find the time to create a NuGet package for it (and for IoC) now.

Using Jenkins to Build from Git without the Git plug-in

A few months ago we decided to upgrade our source control provider and moved everything over to Visual Studio Online.  It's been working great for source control, though getting used to Git instead of the TFS source control is a bit of work.  For a few reasons we're not using the build features of Visual Studio Online, but instead are using a Jenkins build server.  Jenkins is really nice and can do just about anything you could want, which is a big plus.  The only downside is that it's all in Java.  Why is that a downside, you may wonder?  Well, if things get broken, you're in a pickle.

We were running all well and good for over a month.  Nightly builds were running.  Version numbers were auto-incrementing.  Releases for Windows and Mono were getting auto-generated and FTPed up to the public release directory.  Things were great.  Until about a week before Christmas, when an update for the Git plug-in was released.  The Git plug-in is what allows you to configure Jenkins to easily connect to a Git server and pull your source code.  Well, the plug-in update broke the ability to get code from a Git server on Windows.  Now Jenkins has a rollback feature, and had I understood what the failure actually was (it wasn't obvious that it was a plug-in failure), I could have rolled back and probably been fine.  But I didn't know.  And in my effort to "fix" things, I wiped out the archived version that the rollback feature relies on.

So the option was to either install a Java environment and try to learn how Jenkins works and fix it, or to wait for the community to fix the problem.  I opted for the latter, because surely it would break other people and get straightened out quickly, right?  Hmm, not so much, it seems.  I found a reported bug and asked for a time estimate.  I waited a few days.  No fix.  I left the office for a week of "unplugged" vacation and came back.  No fix.  I then learned that you can access the nightly builds of the plug-ins themselves (which is actually pretty cool), so I tried manually installing the latest build of the plug-in.  Turns out it was still broken.

While I was trying to figure out what was broken, I also appear to have broken something in the Git workspace on the server, so it was hard to tell if the plug-in was failing or if Git was confused.  I know that I was confused.  So today I decided that I really needed to get this stuff working again.  I changed the Job to no longer use source control, but instead to just run Windows batch files.

REM make sure nothing is hidden 
attrib -H /S
REM recursively remove child folders 
for /d %%X in (*.*) do rd /s /q "%%X"
REM delete files in root folder 
del /q /f *
REM get the source (the clone creates the repo in a SolutionFamily
REM subfolder, so git init isn't needed and the git commands that
REM follow have to run inside that folder)
git clone https://[username]:[password]@opennetcf.visualstudio.com/DefaultCollection/_git/SolutionFamily
cd SolutionFamily
REM check out
git checkout master
REM increment the build number
powershell -File "%WORKSPACE%\Utility\SetFamilyVersion.ps1" 2.1.%BUILD_NUMBER%.0
REM commit the change (commit -a stages the modified, tracked version file)
git add ./Common/SolutionFamily.VersionInfo.cs
git commit -a -m auto-version
git push https://[username]:[password]@opennetcf.visualstudio.com/DefaultCollection/_git/SolutionFamily

Once that was done, the MSBUILD plug-in was able to build from the workspace, though the source code directory had moved down one level compared to where the Git plug-in had been pulling code.  If I had wanted to, I could have had my batch step do the build as well and skipped the MSBUILD plug-in entirely by adding this to the end:

C:\Windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe "/p:Configuration=Debug;Platform=Any CPU" /m "%WORKSPACE%\SolutionFamily\SolutionEngine.FFx.sln" && exit %%ERRORLEVEL%%

Once the Git plug-in is actually fixed, I’ll post how to use it to connect to Visual Studio Online.  It actually seems to be working “somewhat” this morning.  I say “somewhat” because while it actually is pulling the code and behaving properly, when you do the configuration you get an error, which makes it look like it’s going to fail.  Until that’s ironed out I’m going to wait.

Lots of ORM Updates

We use the OpenNETCF ORM in the Solution Family products.  Unfortunately I haven’t figured out a good way to keep the code base for the ORM stuff we use in Solution Family in sync with the public code base on CodePlex, so occasionally I have to go in and use Araxis Merge to push changes into the public tree, then check them into the public source control server.  What that means to you is that you’re often working with stale code.  Sorry, that’s just how the cookie crumbles, and until I figure out how to clone myself Multiplicity-style, it’s not likely to change.

At any rate, we’re pretty stable on the Solution Family side of things, so I did a large merge back into the public tree this evening.  I still have to do a full release package, but the code is at least up to date as of change set 104901 and all of the projects (at least I hope) properly build.

Most of the changes revolve around work I’ve been doing with the Dream Factory cloud implementation, so there are lots of changes there, but I also have been doing more with DynamicEntities, so some changes were required for that too.  Of course there are assorted bug fixes as well, most of them in the SQLite implementation.  I leave it to you and your own diff skills if you really, really want to know what they are.

Go get it.  Use it.  And for Pete’s sake, quit writing SQL statements!
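
If you've not used the ORM before, the basic no-SQL workflow looks roughly like the sketch below. Consider it a sketch from memory rather than documentation; verify the member names (AddType, CreateStore, Insert) against the current source:

// a minimal sketch assuming the SQLite implementation and an entity
// class like the SFTraceEntry used elsewhere on this blog; note that
// there are no SQL statements anywhere in sight
var store = new SQLiteDataStore("app.db");
store.AddType<SFTraceEntry>();    // register the entity type
store.CreateStore();              // create the backing table(s)

store.Insert(new SFTraceEntry { TimeStamp = DateTime.Now });
var entries = store.Select<SFTraceEntry>();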

HOWTO: Add the Win32 file version to your .NET Compact Framework assemblies

[NOTE: This is an old post from November 15, 2004 by Neil Cowburn that is hit fairly frequently and that I've recovered using the Wayback Machine]

Currently, there is only one supported method of setting the Win32 file version of your .NET Compact Framework assemblies: command-line compiling your project using the "/win32res" switch with csc.exe and a Win32 resource file. This is definitely not an optimal solution if you are not familiar with command-line compiling .NET CF apps.
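
For reference, such a build looks something like this (the file names are hypothetical, and a real .NET CF build also needs the CF reference assemblies, which I'm omitting):

csc /target:winexe /out:MyApp.exe /win32res:MyApp.res AssemblyInfo.cs Form1.cs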

In the .NET Framework, those lucky developers are able to set the Win32 file version using a special attribute in the AssemblyInfo file. However this attribute, System.Reflection.AssemblyFileVersionAttribute, is missing from the .NET Compact Framework. How can we fix this so that we can easily set the Win32 file version? Easy! Add the following code to your project:

using System;
namespace System.Reflection
{
    [AttributeUsage(AttributeTargets.Assembly, AllowMultiple=false)]
    public class AssemblyFileVersionAttribute : Attribute
    {
        private string version;
        public string Version
        {
            get { return version; }
        }
        public AssemblyFileVersionAttribute(string version)
        {
            if(version == null)
            {
                throw new ArgumentNullException("version");
            }
            this.version = version;
        }
    }
}

And then, in your AssemblyInfo file, add the following attribute:

[C#]

[assembly: AssemblyFileVersion("1.0.0")]


[VB]

<Assembly: AssemblyFileVersion("1.0.0")>


Compile your project and then check out its property page using Windows Explorer. You should see that the File Version information has been successfully added to your assembly.

Developing Compact Framework App in Visual Studio 2013

A friend, colleague and fellow MVP, Pete Vickers, brought an interesting product to my attention this weekend.  iFactr has a Compact Framework plug-in for Studio 2013.  I've not tried the plug-in, so this isn't an endorsement, just a bit of information.  I also don't know how they're pulling it off.  It looks like they have WinMo 6.5 and emulator support, and it requires an MSDN subscription.  I suspect that it requires you to install Studio 2008 so you get the compilers, emulators and all of that goodness on your development system, and it then hooks into those pieces from Studio 2013.

It most certainly is not adding any new language features – you're still going to be targeting CF 3.5 in all its glory – but the ability to use a newer toolset is a welcome addition.  If they somehow are pulling it off without requiring Visual Studio 2008, that will be really nice.  If you've tried the plug-in, let me know how it went in the comments.