New ORM Release: v1.0.14007

I’ve finally gotten around to wrapping up all of the changes I’ve made in the last year (has it really been that long since the last release?) to the OpenNETCF ORM library.  The changes have always been available in the change set browser, but I now have them packaged as binary and source downloads.  I should probably find the time to create a NuGet package for it (and IoC) now.

Using Jenkins to Build from Git without the Git plug-in

A few months ago we decided to upgrade our source control provider and moved everything over to Visual Studio Online.  It’s been working great for source control, though getting used to Git instead of TFS source control takes a bit of work.  For a few reasons we’re not using the build features of Visual Studio Online, but are instead using a Jenkins build server.  Jenkins is really nice and can do just about anything you could want, which is a big plus.  The only downside is that it’s all in Java.  Why is that a downside, you may wonder?  Well, if things get broken, you’re in a pickle.

We were running all well and good for over a month.  Nightly builds were running.  Version numbers were auto-incrementing.  Releases for Windows and Mono were getting auto-generated and FTPed up to the public release directory.  Things were great.  Until about a week before Christmas, when an update for the Git plug-in was released.  The Git plug-in is what allows you to configure Jenkins to easily connect to a Git server and pull your source code.  Well, the plug-in update broke the ability to get code from a Git server on Windows.  Now, Jenkins has a rollback feature, and had I understood what the failure actually was (it wasn’t obvious that it was a plug-in failure), I could have rolled back and probably been fine.  But I didn’t know.  And in my effort to “fix” things, I wiped out the archived version that the rollback relies on.

So the options were to either install a Java environment and try to learn how Jenkins works and fix it myself, or to wait for the community to fix the problem.  I opted for the latter, because surely it was breaking other people too and would get straightened out quickly, right?  Hmm, not so much, it seems.  I found a reported bug and asked for a time estimate.  I waited a few days.  No fix.  I left the office for a week of “unplugged” vacation and came back.  No fix.  I then learned that you can access the nightly builds of the plug-ins themselves (which is actually pretty cool), so I tried manually installing the latest build of the plug-in.  Turns out it was still broken.

While I was trying to figure out what was broken, I also appear to have broken something in the Git workspace on the server, so it was hard to tell if the plug-in was failing or if Git was confused.  I know that I was confused.  So today I decided that I really needed to get this stuff working again.  I changed the Job to no longer use source control, but instead to just run Windows batch files.

REM make sure nothing is hidden
attrib -H /S
REM recursively remove child folders
for /d %%X in (*.*) do rd /s /q "%%X"
REM delete files in root folder
del /q /f *
REM get the source (clone creates the repo itself, so no git init is needed;
REM it lands in a SolutionFamily subfolder of the workspace)
git clone https://[username]:[password]@opennetcf.visualstudio.com/DefaultCollection/_git/SolutionFamily
cd SolutionFamily
REM check out the branch we build from
git checkout master
REM increment the build number in the version file
powershell -File "%WORKSPACE%\Utility\SetFamilyVersion.ps1" 2.1.%BUILD_NUMBER%.0
REM commit the change (-a picks up the modified, already-tracked version file)
git commit -a -m auto-version
git push https://[username]:[password]@opennetcf.visualstudio.com/DefaultCollection/_git/SolutionFamily

Once that was done, the MSBUILD plug-in was able to build from the workspace, though the source code directory had moved down one level compared to where the Git plug-in had been pulling code (the clone lands in a SolutionFamily subfolder).  If I had wanted to, I could have skipped the MSBUILD plug-in entirely and had my command do the build as well by adding this to the end:

C:\Windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe "/p:Configuration=Debug;Platform=Any CPU" /m "%WORKSPACE%\SolutionFamily\SolutionEngine.FFx.sln" && exit %ERRORLEVEL%

Once the Git plug-in is actually fixed, I’ll post how to use it to connect to Visual Studio Online.  It seems to be working “somewhat” this morning.  I say “somewhat” because while it is pulling the code and behaving properly, the configuration page shows an error, which makes it look like it’s going to fail.  Until that’s ironed out, I’m going to wait.

Lots of ORM Updates

We use the OpenNETCF ORM in the Solution Family products.  Unfortunately I haven’t figured out a good way to keep the code base for the ORM stuff we use in Solution Family in sync with the public code base on CodePlex, so occasionally I have to go in and use Araxis Merge to push changes into the public tree, then check them into the public source control server.  What that means to you is that you’re often working with stale code.  Sorry, that’s just how the cookie crumbles, and until I figure out how to clone myself Multiplicity-style, it’s not likely to change.

At any rate, we’re pretty stable on the Solution Family side of things, so I did a large merge back into the public tree this evening.  I still have to do a full release package, but the code is at least up to date as of change set 104901 and all of the projects (at least I hope) properly build.

Most of the changes revolve around work I’ve been doing with the Dream Factory cloud implementation, but I’ve also been doing more with DynamicEntities, so some changes were required there too.  Of course there are assorted bug fixes as well, most of them in the SQLite implementation.  I leave it to you and your own diff skills if you really, really want to know what they are.

Go get it.  Use it.  And for Pete’s sake, quit writing SQL statements!
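
If you’ve never seen the ORM, here’s a rough idea of what replaces those SQL statements.  This is a minimal sketch modeled on the public samples (the entity and file names here are mine, and the API may have drifted between change sets), so check it against the current source:

using System;
using OpenNETCF.ORM;

// a plain class becomes a table via attributes -- no SQL DDL required
[Entity(KeyScheme.Identity)]
public class Person
{
    [Field(IsPrimaryKey = true)]
    public int PersonID { get; set; }

    [Field]
    public string Name { get; set; }
}

public class Demo
{
    public static void Main()
    {
        // create (or open) a SQL Compact store and register the entity type
        SqlCeDataStore store = new SqlCeDataStore("people.sdf");
        store.AddType<Person>();
        if (!store.StoreExists)
        {
            store.CreateStore();
        }

        // insert and query with strongly typed entities instead of SQL text
        store.Insert(new Person { Name = "John Doe" });

        foreach (Person person in store.Select<Person>())
        {
            Console.WriteLine(person.Name);
        }
    }
}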

HOWTO: Add the Win32 file version to your .NET Compact Framework assemblies

[NOTE: This is an old post from November 15, 2004 by Neil Cowburn that is hit fairly frequently and that I've recovered using the Wayback Machine]

Currently, there is only one supported method of setting the Win32 file version of your .NET Compact Framework assemblies: compiling your project from the command line using the “/win32res” switch with csc.exe and a Win32 resource file.  This is definitely not an optimal solution if you are not familiar with command-line compiling .NET CF apps.

In the .NET Framework, those lucky developers are able to set the Win32 file version using a special attribute in the AssemblyInfo file.  However, this attribute, System.Reflection.AssemblyFileVersionAttribute, is missing from the .NET Compact Framework.  How can we fix this so that we can easily set the Win32 file version?  Easy!  Add the following code to your project:

using System;

namespace System.Reflection
{
    // declared in the System.Reflection namespace so the attribute's full name
    // matches the desktop framework's, and AssemblyInfo needs no changes
    [AttributeUsage(AttributeTargets.Assembly, AllowMultiple = false)]
    public class AssemblyFileVersionAttribute : Attribute
    {
        private readonly string version;

        public string Version
        {
            get { return version; }
        }

        public AssemblyFileVersionAttribute(string version)
        {
            if (version == null)
            {
                throw new ArgumentNullException("version");
            }
            this.version = version;
        }
    }
}

And then, in your AssemblyInfo file, add the following attribute:

[C#]

[assembly: AssemblyFileVersion("1.0.0")]

[VB]

<Assembly: AssemblyFileVersion("1.0.0")>

Compile your project and then check out its property page using Windows Explorer. You should see that the File Version information has been successfully added to your assembly.
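
If you’d rather verify from code than from Explorer (this bit is not from the original post, just a convenience), you can read the attribute back at runtime with reflection.  Note that this reads the managed attribute itself rather than the native version resource:

using System.Reflection;

public class VersionHelper
{
    // reads back the AssemblyFileVersionAttribute we defined above
    public static string GetFileVersion()
    {
        object[] attributes = Assembly.GetExecutingAssembly()
            .GetCustomAttributes(typeof(AssemblyFileVersionAttribute), false);

        if (attributes.Length == 0)
        {
            return null;
        }

        return ((AssemblyFileVersionAttribute)attributes[0]).Version;
    }
}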

Developing Compact Framework App in Visual Studio 2013

A friend, colleague and fellow MVP, Pete Vickers, brought an interesting product to my attention this weekend.  iFactr has a Compact Framework plug-in for Visual Studio 2013.  I’ve not tried the plug-in, so this isn’t an endorsement, just a bit of information.  I also don’t know how they’re pulling it off.  It looks like they have WinMo 6.5 and emulator support, and it requires an MSDN subscription.  I suspect it requires you to install Studio 2008 so you get the compilers, emulators and all of that goodness on your development system, and that it then hooks into those pieces from Studio 2013.

It most certainly is not adding any new language features – you’re still going to be targeting CF 3.5 in all its glory – but the ability to use a newer toolset is a welcome addition.  If they somehow are pulling it off without requiring Visual Studio 2008, that will be really nice.  If you’ve tried the plug-in, let me know how it went in the comments.

Windows CE on Arduino?

If you do much “maker” stuff, you’re probably aware of the Netduino, an Arduino-compatible board that runs the .NET Micro Framework.  Cool stuff and it allows you to run C# code on a low-cost device that could replace a lot of microcontroller solutions out there.

It just came to my attention today that there’s a new game in town – 86duino, an Arduino-compatible x86 board.  Say what?!  Basically we have an Arduino-size, Arduino-cost ($39 quantity-1 retail price, hello!) device that can run any OS that runs on x86.  Let’s see, an OS that runs on x86, does well in a headless environment, runs managed code, can be real-time, has a small footprint and low resource utilization?  How about Windows CE?  There’s no BSP for it yet that I see, but it’s x86, so the CEPC BSP is probably most of what you need for bring-up.

I’ll be looking to build up a managed code library to access all of the I/O on this and some popular shields.  Any requests/thoughts on “must-have” shield support?

Building a Mono Solution with Jenkins

I recently switched our entire build system over to a Jenkins build server.  It’s been running for a couple of weeks now, I keep expanding the jobs I have it doing, and all in all I’m very happy with how it’s going.  It’s certainly saved me a load of time, and the fact that we now get automated nightly builds of all of our installers is extremely valuable to us and our customers.

One of the challenges in getting things working was automating the build of the Linux installer for Solution Engine.  The Jenkins server is a Windows server, and Solution Engine is built using Mono, but the actual deployment package is a tarball.  Generating tarballs on a Windows platform really isn’t well documented or outlined anywhere that I could find, but I was able to piece the process together from some help files, man pages and a lot of iterations.

In the end, the build portion of the job is done through four separate “Execute Windows Batch Command” steps.

Step 1: Compile the Solution

This one was pretty straightforward.  You simply use the xbuild.exe application that ships with Mono and point it at your Visual Studio/Xamarin Solution File:

"C:\Program Files (x86)\Mono-3.2.3\lib\mono\4.5\xbuild.exe" /p:Configuration=Debug SolutionEngine.Mono.sln

Step 2: Copy the results

xbuild puts the files into the output structure I want (that’s how the Visual Studio solution is architected), but it also generates a lot of cruft.  I use robocopy, which is already part of Windows Server, to copy all of my files except those with a *.pdb or *.mdb extension to a temporary output location:

robocopy Publish\SolutionEngine\Debug\Mono\ Installers\output\ /s /xf *.pdb *.mdb

Step 3: Put the results into a tarball

Next I use 7-Zip to build the tarball.  This part was surprisingly confusing.  7-Zip has some documentation, but it’s far from clear, and even a page of “command line examples” I found didn’t give me much more clarity.  I just did lots and lots of iterations on the build, checking the output every time to see what happened.  This is the command I ended up with:

"C:\Program Files (x86)\7-Zip\7z.exe" a -ttar SolutionEngine.tar "%WORKSPACE%\Installers\output\*.*" -mmt -r

This breaks down as follows:

  • The ‘a’ flag means I’m creating (as opposed to extracting from) an archive
  • -ttar is a switch meaning “type tar” – I’m using a tar container (as opposed to, say, a zip, iso or whatever)
  • SolutionEngine.tar is the output file.  It ends up in the working folder where I’m running the command.
  • The next section is the “source”.  %WORKSPACE% is an environment variable Jenkins sets and the rest of the path you can see is the destination from Step 2 above.  *.* means I want everything found at the source location in the tar.
  • -mmt means “use multi-threading when compressing” which I think ends up using multiple cores if available.  I didn’t compare times with and without, so I don’t know how much it helps and it’s probably negligible – my end tar is only about 9MB.  Having the switch doesn’t cause problems, so I’m leaving it in.
  • The -r flag means “recurse the source.”  My source folder has several layers of subdirectories and I want them all in the tarball, maintaining the folder structure.  This flag achieves that.

Step 4: Compress the tarball

For those unaware, a “tar.gz” file is a double operation – package, then compress the package.  Step 3 built the package, but to be friendly to both my upload process and our customers who have to download it, I also compress the file.  To compress the tarball with gzip compression I again use 7-zip.

"C:\Program Files (x86)\7-Zip\7z.exe" a -tgzip SolutionEngine.tar.gz "%WORKSPACE%\SolutionEngine.tar"

This one is simpler than Step 3 and breaks down as follows:

  • The ‘a’ flag, again, means I’m creating an archive
  • -tgzip is a switch meaning “compress using type gzip”
  • SolutionEngine.tar.gz is the output file.  Again, it ends up in the working folder where I’m running the command.
  • The next section is the “source” file, which was the tar file output by Step 3 above.

New Blog for IoT/Intelligent Systems Topics

I’ve created a new blog page specifically for information on Intelligent Systems, IoT and M2M topics.  This blog will continue to have the same type of content I’ve typically posted – developer-centric information covering cross-platform and embedded devices for the .NET developer – but topics that look specifically at IoT and our Solution Family products will now live over on our sister blog.

Our New Cross-Platform Build, Test, Store and Deploy Architecture

First, let me preface this by saying no, I’ve not migrated any Compact Framework application to Visual Studio 2013.  We’re still using Visual Studio 2008 for CF apps, so don’t get too excited.  That said, we’ve done some pretty interesting work over the last week, so please read on.

Microsoft recently announced the availability of not just Visual Studio 2013, but also Visual Studio Online, which is effectively a hosted version of Team Foundation Server 2013.  We use the older TFS 2010 internally as our source control provider as well as for running unit tests, but it’s got some significant limitations for our use case.

The biggest problem is that our flagship Solution Engine product runs on a lot of platforms – Windows CE, Windows Desktop and several Linux variants.  For Linux we’re using Mono as the runtime, which means we’re using XBuild to compile and Xamarin Studio for code editing and debugging.  Well Mono, XBuild and Xamarin Studio don’t really play well with TFS 2010.  To put it bluntly, it’s a pain in the ass using them together.  You have to train yourself to have Visual Studio and Xamarin Studio open side by side and to absolutely always do code edits in Visual Studio so the file gets checked out, but do the debugging in Xamarin Studio.  Needless to say, we lose a lot of time dealing with conflicts, missing files, missing edits and the like when we go to do builds.

TFS 2013 supports not just the original TFS SCC; it also supports Git as an SCC, which is huge, since Xamarin Studio also supports Git.  The thinking was that this would solve the cross-platform source control problem, so even if everything else stayed the same, we’d end up net positive.

I decided that if we were going to move to TFS 2013, we might as well look at having it hosted by Microsoft at the same time.  The server we’re running TFS 2010 on is pretty old, and to be honest I hate doing server maintenance.  I loathe it.  I don’t want to deal with getting networking set up.  I don’t like doing Hyper-V machines.  I don’t like dealing with updates, potential outages and all the other crap associated with having a physical server.  Even worse, that server isn’t physically where I am (all of the other servers we have are), so I have to deal with all of that remotely.  So I figured I’d solve problem #2 at the same time by moving to the hosted version of TFS 2013.

Of course I like challenges, and Solution Engine is a mission-critical product for us.  We have to be able to deliver updates routinely. It’s effectively a continuously updated application – features are constantly rolling into the product instead of defined periodic releases with a set number of features.  We’ll add bits and pieces of a feature incrementally over weeks to allow us to get feedback from end users and to allow feature depth to grow organically based on actual usage.  What this means is that the move had to happen pretty seamlessly – we can’t tolerate more than probably 2 or 3 days of down time.  So how did I handle that?  Well, by adding more requirements to my plate, of course!

If I was going to stop putting my attention toward architecting and developing features and shift to our build and deployment system, I decided it was an excellent opportunity to implement some other things I’ve wanted to do.  So my list of goals started to grow:

  1. Move all projects (that are migratable) to Visual Studio 2013
  2. Move source control to Visual Studio Online
  3. Abandon our current InstallAware installer and move to NSIS, which meant:
    1. Learn more about NSIS than just how to spell it
    2. Split each product into separate installers with selectable features
  4. Automate the generation of a tarball for installation on Linux
  5. Automate FTPing all of the installers to a public FTP
  6. Set up that FTP server
  7. Set up a nightly build for each product on each platform that would also build the installers and do the FTP upload
  8. Set up a Continuous Integration build for each product on each platform with build-break notifications

Once I had my list, I started looking at the hosted TFS features and what they could do to help me get some of the other items on my list done.  It turns out it does have a Build service and a Test service, so it could do the CI builds for me – well, the non-Mono CI builds anyway.  The nightly builds could be done, but no installer or FTP actions would be happening.  And it looked like I was only going to get 60 minutes of build time per month for free.  Considering that a build of just Engine and Builder for Windows takes roughly 6 minutes, and I wanted to build nightly, I needed to think outside the box.

I did a little research and ended up installing Jenkins on a local server here in my office (yes, I was trying to get away from a server and ended up just trading our SCC server for a build server).  The benefit here is that I’ve now got it configured to pull code for each product as check-ins happen and then do CI builds to check for regressions.  If a check-in breaks any platform, everyone gets an email.  So if a Mono change breaks the CF build, we know.  If a CF change breaks the desktop build, we know.  That’s a powerful feature that we didn’t have before.

Jenkins also does our nightly builds, compiles the NSIS installers and builds the Linux tarballs.  It FTPs them to our web site so a new installation is available to us or customers every morning just like clockwork, and it emails us if there’s a problem.

It was not simple or straightforward to set all of this up – it was actually a substantial learning curve for me on a whole lot of disparate technologies.  But it’s working and working well, and it only took about 6 days to get going.  We had a manual workaround for generating builds after only 2 days, so there was no customer impact.  The system isn’t yet “complete” – I still have some more Jobs I want to put into Jenkins, and I need to do some other housekeeping like getting build numbers auto-incrementing and showing up in the installers – but it’s mostly detail work that’s left.  All of the infrastructure is set up and rolling.  I plan to document some of the Jenkins work here in the next few days, since it’s not straightforward, especially if you’re not familiar with Git or Jenkins, plus I found a couple of bugs along the way that you have to work around.

In the end, though, we got an extremely versatile cross-platform infrastructure.  I’m really liking the power and flexibility it has already brought to our process, and I’ve already got a lot of ideas for additions to it.  If you’re looking to set up something similar, here’s the checklist of what I ended up with (hopefully I’m not missing anything).

Developer workstations with:

  • Visual Studio 2013 for developing Windows Desktop and Server editions of apps
  • Xamarin Studio for developing Linux, Android and iOS editions
  • Visual Studio 2008 for developing Compact Framework editions

A server with:

  • Visual Studio 2013 and 2008 (trying to get msbuild and mstest running without Studio proved too frustrating inside my time constraints)
  • Mono 3.3
  • 7-Zip (for making the Linux tarballs)
  • NSIS (for making the desktop installers)
  • Jenkins with the following plug-ins
    • msbuild
    • mstest
    • git
    • ftp

Getting Mouse Events in a Compact Framework TextBox

Yesterday I got a support request for the Smart Device Framework.  The user was after a seemingly simple behavior – they wanted to get a Click or MouseDown event for a TextBox in their Compact Framework application so they could select the full contents of the TextBox when a user tapped on it.

Of course on the desktop this is pretty simple: you’d just add a handler to the Click event and party on.  Well, of course the Compact Framework can’t be that simple.  The TextBox has no Click, MouseUp or MouseDown events.  Thanks, CF team.  There are some published workarounds – one on CodeProject and one on MSDN – but they involve subclassing the underlying native control to get the WM_LBUTTONDOWN and WM_LBUTTONUP messages, and honestly that’s just not all that fun.  Nothing like making your code look like C++ to kill readability.

For whatever reason (I can’t give a good one offhand) the TextBox2 in the Smart Device Framework also doesn’t give any of the mouse events, *but* it does offer a really easy way to add them, since it allows simple overriding of WndProc.  Basically you just have to create a new derived control like this:

    public class ClickableTextBox : TextBox2
    {
        public event EventHandler MouseDown;

        protected override void WndProc(ref Microsoft.WindowsCE.Forms.Message m)
        {
            base.WndProc(ref m);

            switch ((WM)m.Msg)
            {
                // do this *after* the base so it can do the focus, etc. for us
                case WM.LBUTTONDOWN:
                    var handler = MouseDown;
                    if (handler != null) handler(this, EventArgs.Empty);
                    break;
            }
        }
    }
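
One note on the cast: this assumes a WM enumeration is in scope – the Smart Device Framework ships one in OpenNETCF.Win32.  If you’re not pulling in that namespace, a minimal stand-in needs only the single message this example cares about:

    // minimal stand-in for the SDF's WM enumeration
    public enum WM
    {
        LBUTTONDOWN = 0x0201   // WM_LBUTTONDOWN
    }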

And then using it becomes as simple as this:

    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();

            // textBox2 is declared as a ClickableTextBox (swap the type in the designer file)
            textBox2.Text = "lorem ipsum";
            textBox2.MouseDown += new EventHandler(textBox2_MouseDown);
        }

        void textBox2_MouseDown(object sender, EventArgs e)
        {
            textBox2.SelectAll();
        }
    }