Wednesday, December 23, 2009

Dynamics SL....

I'm currently stretched between two projects at work: one a rather long-term system to improve the efficiency of document creation for a local loan company, and the other assisting with the customizations inherent in a Microsoft Dynamics SL (Solomon) migration from version 6.5 SP1 to 7.0 FP1.

Now, it's that second one that I need to talk about right now, because there's something about it that floors me: there seems to be absolutely no one talking about the problems, processes, and whatnot that they've stumbled across while dealing with Solomon.

For example, the system is very easy to customize via Visual Basic for Applications (VBA) and is in fact designed from the ground up for that purpose. You're able to move controls, add new ones, and even go so far as to add entirely new data structures to the database and gain read/write access to them.

But then you have to wonder where on earth these changes are stored, and the answer to that is: within a database field.

Now, they shove the customizations as binary data into a binary field, so looking just at the data structures, one would think that there was one entry in this table per modification. One would think wrong, as the system would then have no way of knowing which modifications would be applicable when, why, or how. In fact, that would be a nightmare. No, what's actually happening, and why there will be multiple entries per screen in the customization table, is that the system only stores a maximum of 30KB per record. So basically all the VBA is changed into a binary-encrypted string, and then chunked up and stored that way.
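I have no insight into Solomon's actual storage format, but the chunk-and-reassemble idea itself is simple. Here's a rough sketch in Python (the record layout and the 30KB limit are my reading of the table, not documented behavior):

```python
CHUNK_SIZE = 30 * 1024  # Solomon appears to cap each record at ~30KB


def to_records(screen_id: str, payload: bytes):
    """Split one screen's customization blob into ordered fixed-size records."""
    return [
        (screen_id, seq, payload[offset:offset + CHUNK_SIZE])
        for seq, offset in enumerate(range(0, len(payload), CHUNK_SIZE))
    ]


def from_records(records):
    """Reassemble the blob by concatenating chunks in sequence order."""
    return b"".join(chunk for _, _, chunk in sorted(records, key=lambda r: r[1]))
```

The sequence number is what lets the system stitch a screen's customization back together regardless of the order the rows come back in.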

Also, there's no EASY way to identify which controls were added to the form, as it's up to the user to define the naming convention. This can lead to controls named in the same pattern Microsoft uses for its own controls, as well as to things simply called "Button1."

But if you export the customization for that screen, you'll see a list of changes that have been made against all the controls on the screen, and from that one can find the controls that have the "Created=True" property. Those are the controls created by the end user via the customization screens.
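Scanning such an export for user-created controls is then a few lines of work. This is a hedged sketch in Python, assuming a simple `Control.Property=Value` line layout rather than Solomon's exact export format:

```python
def user_created_controls(export_text: str):
    """Return the names of controls whose export lists Created=True."""
    created = set()
    for line in export_text.splitlines():
        line = line.strip()
        if "=" not in line:
            continue
        name, _, value = line.partition("=")
        # A user-added control shows up with a "Created=True" property.
        if name.endswith(".Created") and value.strip().lower() == "true":
            created.add(name[: -len(".Created")])
    return sorted(created)
```

Run against an exported screen, that gives you exactly the list I spent days building by hand.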

That knowledge would have saved me days (I was having to search and compare the non-customized version of each screen against the customized version in order to document all the various customizations, so that we could ensure they successfully made the migration), but it was nowhere to be found. At least not out in the open where one could find it.

In fact, I earned the "Tumbleweed" badge at StackOverflow because a question I asked (one which the above knowledge would have answered) received few views and no responses for a decent amount of time.

But hey, I'm here to spread the cheer I guess....

Wednesday, November 18, 2009

IIS7 and the Case of the Reserved Reports

I had fun this morning at work hunting down something highly interesting in regards to IIS7.

What was happening is that a client had kind of... deleted the virtual server on which a web application resided. So, it was getting redeployed to a Windows Server 2008 box that also served as a SQL Server 2008 box.

Well, my boss deployed the application, and when he went to test, everything worked but the reports. Every time he attempted to run a report, the only thing that came back was a "Service Unavailable" message.

He upgraded the application from .NET 1.1 to 3.5, he played with the configuration values, and he tried a host of other things, up to and including special logging just for the 503 error hiding behind the "Service Unavailable" message.

Nothing worked. Every other piece of the application was happy; none of the reports were.

So, we get together and start troubleshooting.

First thing we do, is create a blank ASPX file that just implements a report viewer, and it still comes back as "Service Unavailable."

Well, we add another new blank ASPX file, this time without even the report viewer, and it still comes back as "Service Unavailable."

Then we drop in an old-fashioned HTML file, and get... the exact same results.

One is starting to get frustrated at this point.

So, we create a new folder, and try the HTML file in it. And we get served the HTML file.

So, we copy one of the existing reports into the new folder... and it works.

Confused, we discuss things, and I had the brilliant idea that maybe, the system was expecting the RDLC files to be located in a different folder than the ASPX files that display them. After all, copying over the one report caused it to work, and that ASPX file points to the path of the RDLC file which it is designed to load.

So, we create a new RDLC folder, move the RDLC files over to it, update all the aspx files to point to the new RDLC folder, publish, test and....

get "Service Unavailable."

So, we back out of all of those changes, republish, and are still getting "Service Unavailable."

I have him rename the folder in Visual Studio and republish and...

it works.

So, we're wondering if it was just a bad compile or something. So we rename the folder back to REPORTS and republish, and upon testing get...

"Service Unavailable."

One can imagine the head banging upon wood desks that began at this moment.

Then a horrible, horrible idea came upon me: what if "reports" was a reserved word in IIS7?

Almost fearful of the results, we just change the folder name on the server from "reports" to "reporting" and it works.

One is flabbergasted.

So, we rename the folder in Visual Studio and then update all the links to the reports and republish, and it's happy as a lark. It works, spits out the report and as far as it is concerned, life is good.

Of course, neither myself nor my boss could believe that Microsoft would make the word REPORTS a reserved keyword for IIS.

The very thought is insane, but that appeared to be what was happening.

So, of course, this demanded some Google time.

Ultimately, this link was found, and it explained the reason why the REPORTS folder was giving us a 503 error.

It seems that when SQL Server 2008 is installed with Reporting Services, the installer registers the REPORTS name with the HTTP.SYS process, which means that requests for the REPORTS folder were never actually making it to IIS.
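If you suspect you've hit the same conflict, the URL namespaces that HTTP.SYS has reserved can be listed from an elevated command prompt on the server (a Windows-only diagnostic; the exact entries vary by machine and install):

```
netsh http show urlacl
```

On a box with Reporting Services installed, I'd expect the output to include reservations along the lines of http://+:80/Reports/ and http://+:80/ReportServer/; any request matching a reserved prefix is handed to the owning service before IIS ever sees it.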

Which explained why the ASPX page wasn't being served, and why the standard HTML file wasn't being served, in fact it explained just about everything.

The only thing it didn't explain is why on earth Microsoft would make this decision!

Thursday, September 24, 2009

Keeping Track Of Non-Modal Dialog Windows

I had a problem.

My problem revolved around a modal dialog window. One which the user base needed to be non-modal.

Now, initially, that sounds like an easy task. It's just a matter of changing the ShowDialog method to a simple Show.

But that's just initially.

The problem comes because this dialog was editing the elements of a list. So, for each page viewed, I could have 1-to-N dialogs, spread out over 1-to-N pages. And I couldn't re-use the dialog windows over and over again.

Of course, even this is not THAT big of an issue--at least until you realize that there is a situation where I do need to re-use a dialog window. Basically whenever a user clicks on an already displayed line item, I need to bring that dialog to the front as opposed to popping a new one up.

And this had to be done without having a handle on the forms, and the control which spawns these dialogs (and its parents) is not set up as an MDI parent.

So, what that provides is the following list of issues:

  • Need to be able to launch N dialogs
  • Those N Dialogs cannot be re-used across list items
  • If a list item is already loaded in a dialog, that dialog just needs to be granted focus
  • No MDI Interface & no time to implement a traditional MDI interface
So, I turned to my trusty friend GOOGLE, and found pretty much zip. Apparently, most non-modal dialogs are either singletons or they're spawned and forgotten, which for the most part is the case. I can see that the constraints I'm working under are not common.

So, after thinking some more, I create a new project in Visual Studio to allow me to play with things. I tried grabbing the HWND object for my dialog but that was obscenely bulky and prone to errors.

Then I had the grand idea of keeping a list of these dialog windows in my control. That way I would have a handle for all of them.

So I tried it. I launched three different list item elements, all of them popped up. When I went back to the main form, and clicked one of those list item elements again, the correct dialog gained focus. I was scrubbing my hands and getting ready to do that "I control the world and am a genius" cackle.

But I had one last test: I closed one of the dialogs and then clicked its corresponding list item.

And got nothing. Zilch.

After a quick breakpoint add, I discovered that those closed dialogs still existed in my list.

So, I proceed to bang my head on my desk.

Then I went and retrieved my boss, and presented the problem to him, hoping he'd have some brilliant idea. He failed to grasp the problem until I reminded him of that whole "there's no MDI" thing, at which point he got this kind of sick/amazed/amused look and basically told me I should have fun trying to solve this one.

Which is when I remembered events.

Oh those beautiful, wonderful things called events.

What had slipped my mind is that an event is raised to the window and then sent down the control chain until it finds a control that basically says "I'm supposed to handle that!"

So I go into my code, and in the CLOSING event of the dialog I raise a new event, one that is consumed by my calling class.

When I handle said event, I know that I can remove the associated dialog from my list of open dialogs.

And wouldn't you know it, it all works just about perfectly.
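The pattern itself is language-agnostic. Here's a minimal sketch of the same idea in Python: a registry of open dialogs keyed by list item, pruned when a dialog's close event fires. The class and member names are mine, not from the actual WinForms code:

```python
class Dialog:
    """Stand-in for a non-modal dialog; fires a callback when closed."""

    def __init__(self, item_key, on_closed):
        self.item_key = item_key
        self._on_closed = on_closed

    def close(self):
        # Mirrors handling the form's Closing event by raising our own
        # event, which the spawning control consumes.
        self._on_closed(self.item_key)


class DialogRegistry:
    """Tracks at most one open dialog per list item."""

    def __init__(self):
        self._open = {}

    def show(self, item_key):
        dialog = self._open.get(item_key)
        if dialog is not None:
            return dialog            # already open: caller grants it focus
        dialog = Dialog(item_key, self._remove)
        self._open[item_key] = dialog
        return dialog                # fresh dialog for this list item

    def _remove(self, item_key):
        # The close event is what tells us it's safe to drop the handle.
        self._open.pop(item_key, None)
```

The key point is the same one I stumbled over: without the close notification, the registry keeps handing back dead dialogs.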

Thursday, September 10, 2009

CAPTCHA Usability

Here's a secret: I despise CAPTCHAs. I hate them. I think they are one of the most user-unfriendly form elements in existence.

Most of the time, I can barely read them.

I understand their purpose, and I know why they exist. But that doesn't mean that I actually like the things, and it definitely doesn't mean that I like entering data into them.

But then there are the ones that I find actively HOSTILE to the end user.

After all, the usage pattern for these evil things is fairly standard, everyone knows what to do:

  1. Squint and try and make out the relevant letters
  2. Type those letters in as they are displayed
  3. Press the submit icon/button/dongle/blood letting device
This process has become ingrained. It's what we expect.

Which is where I get to that hostile statement above.

I found one of these captchas, and entered the data, and it came back invalid.

So I entered it again, with the same results.

And again, and again.

Finally, I was starting to want to smack the website's author when I noticed something.

Instructions.

"Who puts instructions on a CAPTCHA?" I thought to myself as I began to read.

For posterity, I made a screen shot. And in case you were wondering, this is the context of that block of text:
Confirmation code READ THIS INSTRUCTION CAREFULLY: *
Type a lower-case "w" into the box and then follow it with the code exactly as you see it. The code is case sensitive and zero has a diagonal line through it. The initial "w" is necessary - the system will reject your registration if you do not include it
Yes, they REQUIRE you to type a lower-case "w" as the first letter of the field.

Why? As far as I know it's just to annoy anyone trying to register for their site.

My ultimate thought: poor, poor design.

Wednesday, September 9, 2009

More Patent Shenanigans...

Apparently, a company has patented cross-communication between applets of an application utilizing a shared data store with both relational and objective data.

What that means is that someone has patented the concept behind those Apps in Facebook and Twitter, and behind any other application which uses plug-ins (including Firefox and Visual Studio), a data store, and a global network.

What is sad is that they've convinced a judge that there is reason to force Facebook to provide them the ability to review Facebook's source code.

Yes, a United States federal judge has told a company that it must divulge trade secrets and copyrighted material to a competitor in order for that competitor to successfully sue them.

All because Leader Technologies patented the concept of plug-in software.

I'm sickened by this, and see it as yet another reason why software patents need to disappear forever.

Of course if I was Facebook, I would facilitate the review of my source code by providing it in hard copy format.

After all, Facebook wouldn't want to step on anyone's toes by possibly using infringing software that features such niceties as the "Find" function or source code formatting.

Tuesday, September 8, 2009

Software Patents Are Insane

I'm a Software Engineer. I design and build software systems. It's what I do. Even when I'm off work, I design and build software systems.

And I hate software patents.

I firmly believe that they are one of (if not THE) worst idea that has ever been inflicted upon my industry and craft.

Case in point, have you ever typed into the address bar of your web-browser and gone to a website without having to type in the entire domain name? For example, I'll type in "Facebook" and be taken to http://www.facebook.com.

It's a reasonable action on the part of the browser and it makes sense. Additionally, it's functionality that's been around for many, many years now.

Which of course means that it is now patented.

TechCrunch has an article about an Israeli company that has received a new patent for that behavior (and for the "I'm Feeling Lucky" button that's been on Google's homepage for probably a decade now).

What this means is that browsers and search engines are either going to have to change the functionality that people expect and use, or pay a hefty sum in order to keep the functionality.

This is insane.

As a recent Cato article described, imagine if a certain type of plot twist were patented. Imagine never again being able to read about someone suddenly being alive again; never getting to see that literary device used in movies, comics, books, or television shows, unless the company using it paid someone a hefty dollop of dollars.

Sounds insane doesn't it?

Yet, we're perfectly accepting of the insanity in the software field?

No. Or at least, I think it's insane and pray that the upcoming Supreme Court case overturns the ability to patent software.

After all, we all need to remember that software systems are not inventions.

They are copyrightable works of art.

Friday, September 4, 2009

Duplicate Image Resources in WPF

I'm currently in the final stages of a large project. We're talking hundreds upon hundreds of man-hours, thousands upon thousands of lines of code, and even a good hundred or so image resources.

All of this, spread over 15 projects. These projects did everything from standard data access (to the local data store) to administration data access (via the central data repository) to installation and installation custom actions, in addition to the core and supporting functionality of the application itself.

It was a standard system, I had the controlling application, and a bunch of supporting, functionally-grouped class libraries.

The thing is that I built this thing working in those functional groups, and it was rare for a library to talk to another library directly. Sure, I had a library with shared functional classes and then the data layer, but in general, those units worked by themselves, and I built them separately from one another.

Which means that I had a lot of images duplicated across the project.

In fact, the installer (an MSI file) was 15,283KB.

Which isn't that big considering how large this application is, but it's still a big file considering our distribution method is the internet.

So, I started looking at all those images, and realized something.

I didn't need them all.

Because I had a library that was built before all the other libraries, and more importantly was referenced by all of them, I could move all those images into that library, meaning there would be a single place to pull them from.

By the time I had finished moving most of the images into a single DLL (there were some images in the Reporting section that I had to leave in that library) I had the MSI's size down to 11,786KB.

And the coding involved is simple. There are in fact two ways to approach this.

The first is to modify the Image's Source attribute. The usual way to do this is to use

Source="/MyDLL;component/Folder/Image.png"
and then modify it to
Source="/MyOtherDll;component/Folder/Image.png"
Not that hard.

The second way is a bit more work, but is probably a better solution in the long run, especially where ease of re-use and standardization is concerned. And that's to define the source in a style for your image.

In this approach, you'd create a new style for an image, providing it little details such as the Source URI, and then use that as a DynamicResource in your various DLLs. That would look like this:
<Image Style="{DynamicResource UndoImage}" />
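For completeness, here's a sketch of what the style half of that might look like, defined once in a resource dictionary inside the shared DLL. The key, assembly name, and image path are placeholders, not from the actual project:

```xml
<!-- Shared resource dictionary: one definition, consumed from every DLL. -->
<Style x:Key="UndoImage" TargetType="{x:Type Image}">
    <Setter Property="Source"
            Value="/MySharedDll;component/Images/Undo.png" />
</Style>
```

Change the image path here, and every DLL that uses the UndoImage style picks it up.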
But the fundamental thing to remember is that you don't need those images duplicated in every DLL that you're building for your project.

Tuesday, August 25, 2009

Bad UI Decisions From Google

I use Google Docs.

I admit it, I find it easier to use it for SIMPLE documents than MS Word, and it's easier to have constant and consistent access to said documents across multiple computers than using a thumb drive for storage.

Well, I had a work-flow in how I used Google Docs. Mainly, anything I was currently working on would not be in a Folder. As I finished a document, I would move it to star it and then after I dealt with it (published it to a blog or whatever its point was) I would add it to the correct folders, and remove the star.

It was simple, streamlined and above all, it worked.

In fact, I chose Google Docs over Zoho Writer BECAUSE of that whole "Items Not In Folders" view and the fact that I could implement that specific work flow.

I knew at a glance what I was working on and what I had queued for posting/publishing, and when I went through the folders, they contained only the documents relevant to that folder, documents I had generated and dealt with.

But then in the middle of June, Google broke that.

They removed the "Items Not In Folders" view.

The view that I went to immediately upon entering Google Docs.

The view that I spent 95% of my time in Google Docs actually using (that's not counting time actually spent writing).

I believe that this was a poor UI and a poor usability decision.

And I am not alone.

There are a number of posts in the Google Docs Help forums asking what has happened to this view. The blog post on the Google Docs Blog has a number of people wondering what has happened to it, and when it would be back.

Enough so that at one point an actual Google product manager, one Vijay Bangaru, signed into the system and posted the following:

Thanks to everyone for letting us know how much you miss "Items not in folders." We definitely are aware that you miss this feature and are actively looking into what we can do. For an imperfect substitute in the meantime, you can try finding items not in folders by going to one of the views ("Shared with me", "All items", etc) and looking for items without a labeled folder indicator. You can make the visual scanning easier by assigning colors to your folders. Thank you for your patience and feedback as we transition to a new doclist.
Yes, he provided a "work-around," but that work-around is basically the same thing as taking every piece of e-mail you have in Outlook, having never archived any of it or moved any of it to an individual folder, and then visually filtering for the ones without the little red flag icon set on them.

"Imperfect substitute" does not even BEGIN to describe the inherent failure that that process represents.

The thing is, I find the entire situation odd, as I cannot think of a single "programming" reason why that particular view of the data had to go away. The database which stores this information should quite easily be able to generate that view; in fact, I would have it AS a view. It should be a highly simple query to find all the items that are not associated with any tag (or even not associated with a specific tag).
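To illustrate just how cheap that view should be, here's a sketch in Python against SQLite with an invented two-table schema (items, plus an item-to-folder mapping); obviously I have no idea what the real Docs backend looks like:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE item_folders (item_id INTEGER, folder TEXT);
    INSERT INTO items VALUES (1, 'Draft post'), (2, 'Filed doc'), (3, 'New notes');
    INSERT INTO item_folders VALUES (2, 'Published');
""")

# "Items not in folders" is a single anti-join: every item with no
# row in the item-to-folder mapping.
not_in_folders = conn.execute("""
    SELECT i.title
    FROM items i
    WHERE NOT EXISTS (SELECT 1 FROM item_folders f WHERE f.item_id = i.id)
    ORDER BY i.id
""").fetchall()
```

One query, no scanning of folder-indicator icons by eye.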

What is sadder is that there are other options that would meet the needs that I have:
  • If I could have a "Default Folder" option where a newly-created or newly-shared-with-me item is automatically placed (until I remove it) then I could continue my normal work flow.

  • If I were in a Folder, and chose to create a new item, and it was automatically assigned to that folder, then even then I could make things work.
But, I can't, and I can't, and ultimately, using Google Docs has become fundamentally unintuitive for me.

It's been a month, and they obviously have no desire either to repair the functionality or to provide a truly usable work-around; that means the only thing I can think of doing is going back to my thumb drive and MS Word.

Which is sad, because I genuinely liked using Google Docs.

Friday, August 7, 2009

Logging In On a Windows Application

Oh the things people will dream up.

Here's the situation: the customer wants to be able to easily assign user/administrator rights for an application, but they do not want the application to automatically assume the identity of the user who is logged into the computer.

Well, after my brain reset at this odd design decision, I got to work, and began thinking.

I knew I had a few things I could take for granted. I knew I had a SQL Server, and I knew that the client had an Active Directory and a Domain Controller. So far, so good.

The ease of assigning administrators is a blessing with Active Directory. All I need to do is create two global groups in AD, one for users and one for administrators. This makes pass-through authentication against SQL Server easy as well, as all I need to do is add the users group as a SQL Server user and give it the necessary permissions to run the application.
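The SQL Server side of that setup is only a couple of statements. A hedged sketch with placeholder domain, group, and database names (the role grants would be whatever the application actually needs):

```sql
-- Let the AD users group authenticate to the server via Windows auth...
CREATE LOGIN [MYDOMAIN\AppUsers] FROM WINDOWS;

-- ...then map it to a database user and grant it the application's rights.
USE AppDatabase;
CREATE USER [MYDOMAIN\AppUsers] FOR LOGIN [MYDOMAIN\AppUsers];
EXEC sp_addrolemember 'db_datareader', 'MYDOMAIN\AppUsers';
EXEC sp_addrolemember 'db_datawriter', 'MYDOMAIN\AppUsers';
```

Every member of the AD group gets database access through the group, with no per-user SQL logins to maintain.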

Additionally, using AD means that there's one less username/password that the user has to commit to memory; and I'm all about having to remember less stuff.

So far, life is good.

Then I stumbled.

And not just any stumble; we're talking an animal-into-a-tar-pit level of stumbling here.

It was that second part of the requirement, where the user is required to log into the application itself, after they have been logged into the computer.

All right, I thought to myself, that's not a problem. I'll just query AD directly and have it authenticate.

Well, a short discussion with my Sr. Network Engineer/COO corrected that particular opinion. Not only does Microsoft discourage that type of behavior being initiated by an Enterprise-based application, but depending on how AD is configured, it may not even allow an authentication request to be processed.

Additionally, the application itself would need elevated security rights in order to make an authentication request against AD.

He then informed me that SQL Server is the proposed way to utilize AD authentication (not AUTHORIZATION mind you). This goes back to that whole Global Group as a "SQL Server Windows User" thing.

But, after banging my head for a bit, I come to find out that the connection string requires Integrated Security to be TRUE.

Which means it doesn't matter one whit what I put into the username/password textboxes, as authentication occurs based on the Windows identity token under which the application making the request is running.
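For reference, the connection string shape in question looks roughly like this (server and database names are placeholders):

```
Server=MYSERVER;Database=AppDatabase;Integrated Security=True;
```

With Integrated Security set to True, the Windows token of the process opens the connection, and any user ID or password values supplied elsewhere are simply ignored.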

The authorization process (where I ask AD if the provided username is in the expected group) would cull the login request if I provided a bad username, but there was no checking of the password at all.

So, I cried.

Like a baby.

Then had lunch.

When I got back, I saw the problem waiting for me, so I once again turned to Google, MSDN and StackOverflow.

Apparently, I'm the only person who has this particular problem.

Enter prodigious amounts of head-banging.

I considered briefly calling the client up, and asking just how certain they are about this whole double log-in scenario. But then remembered that she was on vacation, and thus out-of-contact.

So, as I expanded my searches into wider and wider realms of dealing with C# and impersonation, I found a discussion where one would be able to launch a process as a different user. This was CLOSE to what I was attempting to do. After all, I didn't care what user context the application was running under, just so long as it was running, and whoever logged into the system was authenticated.

So, I delved further into these concepts, diving deeply into the dark nether regions of namespaces such as System.Diagnostics.

And then I started noticing something. All of these designs were accessing a non-managed DLL. A little thing called "advapi32.dll."

Which has this function:

[DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
public static extern bool LogonUser(string pszUsername, string pszDomain, string pszPassword, int dwLogonType, int dwLogonProvider, ref IntPtr phToken);
Eureka!!

Well, at least somewhat.

With this I was able to pass off the authentication of the supplied username/password to Windows itself. It returns a boolean indicating whether the logon was successful, and if I ever NEED it, I also have an authentication token as an IntPtr.

Additionally, according to MSDN this will continue to work in Windows 7--so I have a year or 5 in which to convince them that a single log-on is acceptable.

Ah... Data Templates, How I Love Thee....


What? You don't like them? Oh, but you will.... you will....

WPF has a number of different Templates which can be applied to controls, and the DataTemplate is the one that's used for things like ListViews, ListBoxes and ComboBoxes.

These things have great and wonderful powers. I use them a lot--mainly when I'm trying to display complex data via a ListView--complex data such as a history of addresses for an entity. Consider, an address has 4-8 fields of data associated with it: 2 or 3 Street lines, a city, a region, a postal code and possibly country and county data. In addition to the data, I want to have command buttons that allow me to perform EDIT/DELETE operations on that line of data.

Now, in a traditional ListView, all I would have to do is bind to my Address class, and override the ToString function to return the data in a single line, and the listview will display it. For this particular example, I'll use the following as my Data Object:


using System.ComponentModel;

public class AddressViewModel : INotifyPropertyChanged
{
    private string _line1;
    private string _line2;
    private string _city;
    private string _state;
    private string _postal;

    public string Line1
    {
        get { return _line1; }
        set { _line1 = value; onPropertyChanged("Line1"); }
    }

    public string Line2
    {
        get { return _line2; }
        set { _line2 = value; onPropertyChanged("Line2"); }
    }

    public string City
    {
        get { return _city; }
        set { _city = value; onPropertyChanged("City"); }
    }

    public string State
    {
        get { return _state; }
        set { _state = value; onPropertyChanged("State"); }
    }

    public string Postal
    {
        get { return _postal; }
        set { _postal = value; onPropertyChanged("Postal"); }
    }

    // Single-string form of the address, used for simple display binding.
    public string AddressString
    {
        get
        {
            string s = _line1;
            if (!string.IsNullOrEmpty(_line2))
                s += "\n" + _line2;
            s += "\n" + _city + ", " + _state + " " + _postal;
            return s;
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void onPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}


The problem there is that it doesn't provide the edit controls that I like having on such complex data types. Plus, there is confusion on what action should be performed when the user selects the item.

So, I build myself the DataTemplate. I define that I want all these elements in a specific order, binding as needed. Additionally, since I have the command buttons in the data template, its DataContext is automatically an instance of my data object, so I have a handle to that specific data object via the SENDER parameter of my Click event.

The DataTemplate XAML is here:

<DataTemplate x:Key="AddressRepeater">
    <DockPanel HorizontalAlignment="Stretch">
        <StackPanel Height="4" DockPanel.Dock="Top" HorizontalAlignment="Stretch">
            <Separator BorderBrush="Black"></Separator>
        </StackPanel>
        <Border BorderBrush="Black" BorderThickness="1" CornerRadius="0.5">
            <StackPanel DockPanel.Dock="Left" HorizontalAlignment="Left"
                        VerticalAlignment="Top" Orientation="Vertical">
                <Button Name="cmdEdit" Style="{StaticResource ClearAutoSizeButton}"
                        Height="20" Width="20" MaxHeight="20" MaxWidth="20"
                        MinHeight="20" MinWidth="20" Click="cmdEdit_Click"
                        HorizontalAlignment="Center" VerticalAlignment="Top"
                        ToolTip="Edit This Address" Margin="5,10,5,10">
                    <Image Style="{DynamicResource EditImage}" />
                </Button>
                <Button Name="cmdDelete" Style="{StaticResource ClearAutoSizeButton}"
                        Height="20" Width="20" MaxHeight="20" MaxWidth="20"
                        MinHeight="20" MinWidth="20" Click="cmdDelete_Click"
                        HorizontalAlignment="Center" VerticalAlignment="Top"
                        ToolTip="Delete This Address" Margin="5,0,5,10">
                    <Image Style="{DynamicResource DeleteImage}" />
                </Button>
            </StackPanel>
        </Border>
        <TextBlock Text="{Binding AddressString}" TextWrapping="Wrap" Margin="10,5,0,0" />
    </DockPanel>
</DataTemplate>


Of course, that alone will give you a very ragged edge, as the controls won't stretch across the window as expected. Each line will only take up as much room as it needs, no more, no less, which does not give you a pretty UI. To make the DataTemplate stretch across the screen, you've got to add some information to the ListView (or its style).

First, you need to set the HorizontalContentAlignment value to "Stretch". Then, you need to set the HorizontalAlignment of the ItemContainerStyle to "Stretch" as well.

But it's still not quite right. If your data extends beyond the size of your window, your controls won't wrap as expected (say you're binding to a TextBlock and you expect all the text to be displayed in the control's viewable space). To fix this, you need to set the horizontal scrollbar to Disabled (which in the ListView is accessed via the ScrollViewer.HorizontalScrollBarVisibility property).

The final prettification you'll need on the control is for alternating background colors. This takes two steps. The first is to add the AlternationCount to the ListView. The second is back in the ItemContainerStyle, where you add a Trigger on ItemsControl.AlternationIndex: when the value is 0, give it one color; when it is 1, give it another. Note that you can have as many alternation colors as you'd like.

The XAML for the "prettification" is:

<ListView Name="lsvAddresses" MinHeight="250" MinWidth="250"
          HorizontalContentAlignment="Stretch" AlternationCount="2"
          ItemTemplate="{DynamicResource AddressRepeater}"
          Style="{DynamicResource ListViewRepeaterStyle}"
          ScrollViewer.HorizontalScrollBarVisibility="Disabled"
          SelectionChanged="lsvAddresses_SelectionChanged">
    <ListView.ItemContainerStyle>
        <Style TargetType="ListBoxItem">
            <Setter Property="HorizontalAlignment" Value="Stretch"/>
            <Style.Triggers>
                <Trigger Property="ItemsControl.AlternationIndex" Value="0">
                    <Setter Property="Background" Value="#FFFFFC"></Setter>
                </Trigger>
                <Trigger Property="ItemsControl.AlternationIndex" Value="1">
                    <Setter Property="Background" Value="WhiteSmoke"></Setter>
                </Trigger>
            </Style.Triggers>
        </Style>
    </ListView.ItemContainerStyle>
</ListView>

So, in the end, once we put all these things together, we'll get ourselves a window with a ListView which displays data like this:


And of course, the Source can be downloaded here

Monday, July 20, 2009

B&N Draconian eBooks

I want to get on the eBook train. I want to get an eBook reader, and have digital copies of all the books that I read that I can carry with me every where I go. I want this because I read. A lot.

Likewise, I spend a good amount on books every year. It’s a category in my family budget. That’s how serious I am about the things.

So, imagine my joy at hearing about another big-name book seller that is creating an eBook-store.

Especially after the idiocy earlier this week of Amazon stealing files from users of its Kindle service who purchased an item in good faith from their store.

So, of course the first thing I do is head over to Barnes & Noble’s new ebooks site—and proceeded to extinguish the joy.

First off, there is no hardware device there at all. But, I’m actually fine with that.

Or I was until I realized that they’re pushing their proprietary software in order to read the eBooks which you “purchase” from their site.

According to their FAQ, these books are in the PDB or PRC formats, which have traditionally been used on Palm devices (and PDB is, more recently, also the extension used by the Protein Data Bank format). So, they’re already starting with their own file format attached to their own software.

Software which is only available for a handful of devices, excluding the Linux OS.

All this said, I LIKE the reader software. It’s intuitive and easy to use.

There are even themes, so that you can easily change the paper to something that works well with your eyes.

But ultimately, that doesn’t help me.

If I am going to use eBooks I want them easily portable. Otherwise, I’ll continue to purchase actual books.

Let’s be clear here. I don’t want a BlackBerry, and have no intention of getting an iPhone/iTouch (at least until they’re undocked from the AT&T service). Therefore my option to read B&N eBooks is to cart around my desktop and monitors?

Because you cannot use it on an eBook reader such as the Cool-er.

Or at least that’s the impression one receives. I’ve not taken the trouble to attempt to convert those PDB files that come with the reader to find out though.

But on the subject of that, I have to say, that’s the most disingenuous free give-away that I’ve seen in a long time.

Free Offer from B&N

If you look closely, there are a possible six books that Barnes & Noble will give you FREE with their reader.

There are two bundled with the software and then an additional four for signing up for an account.

That sixth one on the list I found extremely humorous, considering the fact that I don’t own a device on which I can carry these files (or their bloat-ware) around.

But the other five? Those I laughed aloud over.

They are as follows:

  • The Last of the Mohicans
  • Sense and Sensibility
  • Dracula
  • Pride and Prejudice
  • Little Women

As I was reading that list, it struck me. Every book on that list has been around for over a century.

So, I headed over to Project Gutenberg and took a minute to look up, and find every one of them.

Freebies are easy to give away when they cost the company nothing. It’s the silliest thing I’ve read about since that canned air thing from over in Japan.

How hard is it for these companies to understand this?

I don’t want DRM. I don’t want to use your software.

If I want to write my own software to read the eBooks I purchase, I want to be able to do that without having to break the DRM, and make myself a criminal (and is that not a fun thought? Creating a piece of software to access files you paid for can be a felony).

Otherwise, what’s the use?

Why should I bother with their bookstore and their DRM?

After all, I can easily make myself a Full Auto Book Scanner and media-shift hardcopies to PDF (or using OCR, ePUB) formats.

The goal of Barnes & Noble, Amazon, and anyone else trying to get me to purchase eBooks should be simple: to make me think it not worth the time and effort to media-shift the books I purchase into digital media.

Ultimately that means that I need quick and easy access to the files I purchase, in an open format that I can move between devices.

And they need to be priced reasonably for what amounts to an electronic file.

After all, I switch back and forth between my iPod and my Zune for music/video consumption. I refuse to be tied down to a single device, and I refuse to allow my media library to be treated the same way.

The lesson here: I’ll buy analog and convert before I buy DRM.


Edit 7/22/09: After a few emails back and forth with Barnes & Noble support, I got them to verify that you are only able to use their eBooks with their software.

Another fine example of a company that hates (and is actively hostile towards) its customer base.

Sunday, July 19, 2009

WPF Datagrid Items Refresh, part II


Back in January, I posted a short entry about refreshing the datagrid in WPF, and it has gotten a number of questions. Well, one from Mykhaylo Khodorev made me go look some more into it. Apparently, the use of a generic IEnumerable object was failing this little trick. First, some information I've learned since then.

If the collection you're assigning to the ItemsSource property of your datagrid is an ObservableCollection (or inherits from it), any changes to the collection are automatically propagated to the grid. Secondly, the process described works while using a generic List collection.

But, just so that we're clear on exactly what's happening, I'm placing my code that I used here. So here's the XAML:



<Window x:Class="Window1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:wpftk="http://schemas.microsoft.com/wpf/2008/toolkit"
Title="Window1" Height="300" Width="300" Initialized="Window_Initialized" >
<DockPanel>
<StackPanel Orientation="Horizontal" DockPanel.Dock="Top">
<TextBox Name="txtAddItem" Width="50" MaxWidth="50" />
<Button Name="btnAddItem" Content="Add Item" Click="btnAddItem_Click"
HorizontalAlignment="Right" />
<Button Name="btnRefresh" Content="Refresh" HorizontalAlignment="Right" Click="btnRefresh_Click" />
</StackPanel>
<Grid>
<Grid.ColumnDefinitions >
<ColumnDefinition Width="1*" />
<ColumnDefinition Width="1*" />
</Grid.ColumnDefinitions>
<wpftk:DataGrid Name="xGridObservableCollection" Grid.Column="0" AutoGenerateColumns="False">
<wpftk:DataGrid.Columns>
<wpftk:DataGridTextColumn Header="Item" Binding="{Binding Item}" IsReadOnly="True" />
</wpftk:DataGrid.Columns>
</wpftk:DataGrid>

<wpftk:DataGrid Name="xGridList" Grid.Column="1" AutoGenerateColumns="False">
<wpftk:DataGrid.Columns>
<wpftk:DataGridTextColumn Header="Item" Binding="{Binding Item}" IsReadOnly="True" />
</wpftk:DataGrid.Columns>
</wpftk:DataGrid>
</Grid>
</DockPanel>
</Window>

And here's the VB.Net CodeBehind for that XAML:


Imports System.Collections.ObjectModel

Class Window1

Private Sub btnAddItem_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
Dim x As String = Me.txtAddItem.Text
Dim b As New ItemClass(x)
_itemList.Add(b)
_items.Add(b)
End Sub

Private _items As New Items
Private _itemList As New List(Of ItemClass)

Private Sub Window_Initialized(ByVal sender As System.Object, ByVal e As System.EventArgs)

Dim x As Integer = 0
Do While x < 5
Dim i As New ItemClass("Auto Gen Item # " & x)
i.Value = x
_itemList.Add(i)
_items.Add(i)

x += 1
Loop
Me.xGridList.ItemsSource = _itemList
Me.xGridObservableCollection.ItemsSource = _items
End Sub

Private Sub btnRefresh_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
Me.xGridList.Items.Refresh()
End Sub
End Class

Public Class ItemClass
Implements System.ComponentModel.INotifyPropertyChanged

Private _v As Integer
Public Property Value() As Integer
Get
Return _v
End Get
Set(ByVal value As Integer)
_v = value
OnPropertyChanged("Value")
End Set
End Property
Private _item As String
Public Property Item() As String
Get
Return _item
End Get
Set(ByVal value As String)
_item = value
OnPropertyChanged("Item")
End Set
End Property

Public Sub New(ByVal itemtext As String)
_item = itemtext
End Sub

Protected Overridable Sub OnPropertyChanged(ByVal propertyName As String)
Dim e As New System.ComponentModel.PropertyChangedEventArgs(propertyName)
RaiseEvent PropertyChanged(Me, e)
End Sub
Public Event PropertyChanged(ByVal sender As Object, ByVal e As System.ComponentModel.PropertyChangedEventArgs) Implements System.ComponentModel.INotifyPropertyChanged.PropertyChanged
End Class

Public Class Items
Inherits ObservableCollection(Of ItemClass)
End Class

Now, please realize that this is not 100% perfect (or production-worthy) code, but rather stuff I tossed together to prove the point.

What I've done is built a WPF application which holds two grids, one whose ItemsSource is pointed towards an ObservableCollection while the other's ItemsSource looks at a List. This is to show that when new items are added to these collections, one will automatically update while the other requires an explicit call to the datagrid's Items collection in order for the UI to be updated.

Now if you go back and look at Mykhaylo's code in the comment posted, you'll notice that the values are being populated via an IEnumerable accessed via a LINQ query of a DataTable in a DataSet.

Additionally, when Mykhaylo adds new items to the datatable, it appears in the datatable but not in the datagrid.

There's a reason for this.

And that is because the datagrid and the datatable are not bound together.

Here is Mykhaylo's code:
AccDataGrid.ItemsSource = from account in _myDataSet.Accounts select new Account(account);
Notice that what's happening is that Mykhaylo is basically using LINQ to generate a new sequence of Account objects, one which exists only as the object assigned to the DataGrid's ItemsSource property.

Therefore when new items are added to the _myDataSet.Accounts object, there's nothing anywhere telling the datagrid about those objects.

The easiest solution would be to bind the datagrid in this manner (note that ItemsSource expects an IEnumerable, so in practice you bind the table's DefaultView rather than the DataTable itself):
AccDataGrid.ItemsSource = _myDataSet.Accounts.DefaultView;
Then when rows are added to the Accounts table, it becomes a simple matter of calling AccDataGrid.Items.Refresh() to update the UI.

Wednesday, July 15, 2009

Class Libraries and "ApplicationDefinition"

Oh the joys of Visual Studio IDE.

99% of the time, I love this tool, but there's that other 1% of the time, when I could happily smack it six ways to Sunday.

Well today I was working in a Class Library of one of my projects, and all of a sudden, it stopped compiling.

Just refused.

Which of course broke things up and down the solution, as this particular library is my Data Layer.

So, I peruse the Error List, and come across these two errors:

Error 1 Library project file cannot specify ApplicationDefinition element.
Error 2 The project file contains a property value that is not valid.
With absolutely NO reference to where, how, what, or even which object in the project is generating these two wonderfully ambiguous errors.

So, I turn to my good friend Google.

And got a lot, and I mean a lot, of results. Most of them talked about removing App.xaml from the project (which this project didn't have) or just flat out didn't know what the error was or meant.

It's at this point that I start rubbing the bridge of my nose. I need to test my changes, and deploy for a customer display by this afternoon.

Starting to worry a bit, I removed the new dialog I had created (even though it was a WinForms dialog as opposed to a WPF window) and attempted to recompile.

Still with the error.

So, I change my search parameters slightly, and delve deeper into the mysterious workings of Google Search Results.

And found this link (translated into English here).

The basic scenario it describes is dragging objects from a WPF Application or a WPF Library project into a standard project. Apparently, the Build Action for these items will be changed from whatever it was in the WPF-based project to "ApplicationDefinition".

Which of course breaks a library, as only executable projects can build with ApplicationDefinition, and it's usually the App.xaml file which receives this build type.

Pondering this, as I had not dragged nor dropped anything into the project, I remembered that I had added an object to this library; but instead of creating it new, or dragging & dropping, I had cut-and-pasted it into my resources folder.

So, I looked at the Build Action for this image, and lo and behold, it had been changed to ApplicationDefinition.

I quickly modified it back to Resource like it should have been, and then compiled.
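In project-file terms, the fix amounts to changing the item type of the pasted file back from ApplicationDefinition to Resource. The fragment below is a hypothetical illustration (the image path is made up; the element names are the standard MSBuild item types):

```xml
<!-- Broken: a class library cannot build an ApplicationDefinition item -->
<ItemGroup>
  <ApplicationDefinition Include="Resources\CompanyLogo.png" />
</ItemGroup>

<!-- Fixed: the image embedded as a plain WPF resource -->
<ItemGroup>
  <Resource Include="Resources\CompanyLogo.png" />
</ItemGroup>
```

You can make this change from the Properties window in Visual Studio, or by hand-editing the project file.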

Eureka! And not the tv show, or the anime character, but rather the original intent.

Compilation proceeded, things started working, and life was good.

At least until I ran across my next bug....

Thursday, June 25, 2009

Threading Needles and the BackgroundWorker

I'm not going to lie, sometimes, I'm just so irked at myself for doing something just... stupid, or worse, implementing something that I've not gotten a complete and thorough understanding of.

And yes, I've implemented code structures which I've not understood in their entirety many, many times. I'm a consultant, it's par for the course. Half of my job is just getting a firm enough grasp of various coding scenarios that I can interpret and fix whatever error is cropping up.

Regardless, I've been so engrossed in WPF and just ensuring that things work as I expect (and hope) that I've not paid too much attention to the exact specifics of what was happening. A fact which slapped me in the face, quite hard, when I realized my current application was taking upwards of 8 seconds to save a page and advance to the next one.

8 seconds for a windows-based application, that runs entirely on the client-side of things, is a heck of a long time. It's on the far side of truly usable, IMO.

So of course, I dive into the code and restructure and reorganize and refactor and all those other nifty terms. Whole structures are modified, and I shave the time the page takes to process from 8 seconds down to 2. At least, that's the time that was being reported to me. The application was still noticeably lagging between shifting views. So I watched the second hand on my watch, and despite the 2 seconds being reported, it was closer to 8 on switching.

So, I closed my eyes, exhaled, and then worked until 1 in the morning hunting for the reason.

And I found it.

The page upon advancing, reported that fact to the parent form, which then passed a command to a secondary form to refresh the overview of the structure being operated upon. It was this refresh which was taking nearly 8 seconds.

But that confused me, as I thought that I had made that multi-threaded.

And while doing various readings is when I realized the cold hard truth about a bit of technology that I had utilized.

Basically, I had mis-read the role that Dispatcher has in multi-threaded systems. My thought, and I'm still not sure how I came to this one, was that by Invoking a function against the Dispatcher, it performed its tasks in a separate thread.

Boy, was I wrong.

And in case anyone is wondering, the 60,000-foot view is that Dispatcher.Invoke attaches the given function to the UI thread to allow safe access to the form's controls.

I felt the need to bang my head against the closest hard surface. Luckily for me, my desktop is somewhat covered in papers and books which aren't quite as hard as the wood surface itself.

So, with the cold, hard realization that Dispatcher.Invoke wasn't doing what I thought (which was spawn off a method to a new thread), I went forth to find something that would do it effectively.

And thus, I found the BackgroundWorker class.

This is my new best friend.

Here's how the BackgroundWorker works. You instantiate it as a new object, and then you set up a couple of event handlers for it. There are three possibilities:

  1. DoWork
  2. ProgressChanged
  3. RunWorkerCompleted
The way these work is simple: in the DoWork handler, you perform the actual functionality that you're trying to accomplish. If you wish to report progress back to the user, you tell the object to report some progress (which takes an integer representing the percentage done, and an object that can be basically any other data you desire).

The BackgroundWorker reports being done whenever it reaches the end of the DoWork handler. And again, there's an object that gets passed around from one portion to the other for you to play with.
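The three-handler pattern described above can be sketched like this. This is a minimal, hypothetical example; the class, handler, and string values are mine, not from any real project:

```vbnet
Imports System.ComponentModel

Public Class WorkerHost
    ' WithEvents lets us wire the three events up with Handles clauses.
    Private WithEvents worker As New BackgroundWorker()

    Public Sub Start()
        worker.WorkerReportsProgress = True
        worker.RunWorkerAsync("some input")   ' DoWork runs on a background thread
    End Sub

    Private Sub worker_DoWork(ByVal sender As Object, ByVal e As DoWorkEventArgs) _
        Handles worker.DoWork
        ' The actual long-running functionality goes here.
        For i As Integer = 1 To 10
            ' First argument is the percentage done; second is any extra data you like.
            worker.ReportProgress(i * 10, "finished chunk " & i)
        Next
        e.Result = "all done"   ' handed to RunWorkerCompleted
    End Sub

    Private Sub worker_ProgressChanged(ByVal sender As Object, ByVal e As ProgressChangedEventArgs) _
        Handles worker.ProgressChanged
        ' Raised back on the thread that called RunWorkerAsync (the UI thread here),
        ' so it's safe to update controls. e.ProgressPercentage and e.UserState
        ' carry whatever ReportProgress sent.
    End Sub

    Private Sub worker_RunWorkerCompleted(ByVal sender As Object, ByVal e As RunWorkerCompletedEventArgs) _
        Handles worker.RunWorkerCompleted
        ' Fired when DoWork returns; e.Result holds whatever DoWork produced.
    End Sub
End Class
```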

Then of course, I ran up against the need to be able to spawn up multiple instances of these things. Well, an array of BackgroundWorkers sounds like a horrible, horrible idea, so I once again turned to my research skills.

And thus found the Thread Pool.

This one you don't even instantiate yourself; it's a static class which you utilize to spawn off a function (in this case passed as a delegate) in a separate thread.

The thing about it is that it's a smart object: it watches how heavily the CPU is being utilized and determines whether to create a new thread or wait for one of its current ones to finish as you add things to it for processing.

The bad news is that there's nothing that tells you when an item is finished. Therefore, you must use Dispatcher.Invoke in order to pass information back to the user interface.
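Put together, the pattern I ended up with looks roughly like this. This is a hypothetical sketch; the window, method, and field names are illustrative:

```vbnet
Imports System.Threading
Imports System.Windows.Threading

Partial Public Class OverviewWindow
    Inherits System.Windows.Window

    ' Queue the refresh work onto a thread-pool thread.
    Public Sub BeginRefresh(ByVal structureId As Integer)
        ThreadPool.QueueUserWorkItem(AddressOf RefreshWorker, structureId)
    End Sub

    ' Runs on a pool thread: do the slow work here, never touch controls.
    Private Sub RefreshWorker(ByVal state As Object)
        Dim id As Integer = CInt(state)
        Dim summary As String = BuildOverview(id)   ' the expensive part

        ' The pool gives no completion event, so marshal the result back
        ' to the UI thread ourselves via Dispatcher.Invoke.
        Me.Dispatcher.Invoke(New Action(Of String)(AddressOf ShowOverview), summary)
    End Sub

    Private Sub ShowOverview(ByVal summary As String)
        ' Safe to touch the UI here; this runs on the UI thread.
        ' e.g. Me.txtOverview.Text = summary
    End Sub

    Private Function BuildOverview(ByVal id As Integer) As String
        Return "overview for structure " & id
    End Function
End Class
```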

Now, just watch: having learned all of that through hassle, Microsoft will go and change everything in .NET 5 or something silly like that.

Thursday, April 30, 2009

Sometimes I Wonder...

You know, I sometimes wonder about the intellectual capabilities of some of my peers. Specifically, my peers in the programming/software industry. Or at least their Marketing Masters.

The reason I wonder is that on occasion, you'll get things just named stupidly. Utterly, and hopelessly stupidly.

A while back, one of my friends passed a contact to me that needed some help fixing his website. He had gotten it built, and then the development company he had used more or less disappeared on him. So, I got his issues straightened out, and his site was back in business, and he was happy.

Fast forward just under two years to this evening. I got a call from him, and it appears that his site was experiencing some major issues. For whatever reason, ASP.NET was not loading the DLLs it needed to run, and that was throwing configuration errors, and stopping processing on that yellow-screen of annoyance that anyone who has developed in ASP.NET has seen at least once.

The first thing I did was FTP into the server to ensure that the DLLs still existed, which they did. Then I checked on ensuring that the web.config file was well formed (though if it wasn't it wouldn't have worked ~20 months ago when I was working on it).

Finally, deciding that maybe the DLLs in question had just gotten corrupted or expired or something, I found the first one, named FreeTextBox.dll and went hunting. It didn't take long for Google to turn up the results, at which time, I found myself kind of staring at the screen in abject... flabergastion.

The reason why is captured there in that image perfectly.

This is a product named FreeTextBox with the tagline "The no. 1 free ASP.NET HTML Editor."

And then you have two different licenses that you can purchase to legally use this control, one costing roughly $50 and the other $200.

Now sure, the thing is a free download, and it's only "extra" features that you're getting when you get a license, but you've got to really read the website to discover that particular salient factoid.

Just the sheer... gumption it takes to charge for a product that uses FREE in its very name is just... astounding.

Ah well, I'm off to see if I can track down the cause of those missing DLLs...

Tuesday, April 28, 2009

Finally Released the new version of eComic

I’ve pushed out the latest release of eComic, and it is available for download here: http://ecomic.codeplex.com/

This is a fairly drastic overhaul of the entire system, as I made some changes on how I want things to work.

The primary change is that I’ve decided against allowing people to open multiple comic archives within a single instance of the application (though multiple instances can run at one time). This is coupled with a much more efficient memory-management system for the images, which only loads them as they’re needed for the thumbnail display.

Those that download the source code will notice that there’s a new project in the system, one which will produce a second application whose sole purpose will be the creation of the various archives.

The final bit of interest in this release is that I’ve finally got the Graphic Novel Archive (gn*) format up and working. What it is, is an archive of CBR archives. The files can either be read sequentially based upon their names, or a manifest file (XML) can be included which lists the order in which they should be read.

I even generated an XSD for it:

<?xml version="1.0" encoding="UTF-8" ?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="chapter">
<xs:complexType>
<xs:attribute name="sequence" type="xs:NMTOKEN" use="required" />
<xs:attribute name="fileName" type="xs:NMTOKEN" use="required" />
</xs:complexType>
</xs:element>

<xs:element name="graphicNovel">
<xs:complexType>
<xs:sequence>
<xs:element ref="chapter" maxOccurs="unbounded" />
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
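For reference, a manifest conforming to that schema might look like this (the file names are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<graphicNovel>
  <chapter sequence="1" fileName="MySeries_v01_c01.cbr" />
  <chapter sequence="2" fileName="MySeries_v01_c02.cbr" />
  <chapter sequence="3" fileName="MySeries_v01_c03.cbr" />
</graphicNovel>
```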



I’m excited about the GNR format, as this reading-list concept is the primary reason I decided to create eComic. As an aside, the editor portion of the software will also generate the GNR files.


Those looking for content for eComic can find free (and sometimes public domain) copies of comics. Here are two sites:

Thursday, April 9, 2009

My New PC!

Well it finally arrived on Tuesday evening. I've gotten my new pc, and all I have to say is:

POWAH! UNLIMITED POWAH!!!
I know, I know, I'm an utter geek. But hey, this thing is awesome. It has 8GB of RAM (DDR3 SDRAM @ 1066MHz) and an Intel i7-920 processor (8MB cache, 2.66GHz) featuring 8 hardware threads (4 hyper-threaded cores) of sweet, sweet bit-crunching goodness. Couple this with an ATI Radeon HD 4670 video card with 512MB of onboard memory dedicated just to the GPU, and you can imagine the utter... snappiness with which this thing runs.


Just look at that Process tab. Good times.

Sunday, February 15, 2009

Sometimes, I'm Just Soooo Smart

The CD-ROM on my PC is dead. It's been dead for a couple months, and I've not really thought about it, as I've been planning on getting a new one soon. Well, it became an issue, as I purchased some tax software to do my taxes--and when I went to install it I came up hard against that issue.

Not a problem, I'm a geek--more importantly, I'm a software geek.

As such, I have all sorts of things at my disposal to bypass this; namely, a bit of software called "Virtual CloneDrive." What this guy does is allow me to mount an ISO image as a virtual drive.

So, being the geek that I am, I went down to the office, and used the CD-ROM on my PC there to make an image of the disc and shoved it onto my thumb drive so that I could install it at home.

But, that's not the good part of my story for today; that's not the part of the story which warrants the title for this part.

No, the good part is that AFTER all that hassle of making a special trip across town to my office (a nearly 30-minute drive, mind you), me and the wife stopped at Circuit City on the way home to check whether there were any good deals left, as there were just a few days before they closed for good (two Saturdays ago, I got a 1TB network HDD for half price).

Well, while there, I found a game that I had thought interesting, but not worth the $40 price tag that games usually come with. It was only a bit over $6, and it was well worth that particular purchase price, so I got it.

So, then me and the wife left, and she started teasing me about purchasing the game. Mainly, taking the stance that it's been a few months since I've bothered playing a game, and here I was getting a new one.

At which time I was about to tell her that I hadn't played a game recently because my drive was dead.

And the realization of the entire reason for the trip to the office that morning once more slammed into me.

It was enough to make me literally stumble in a step.

All, in all, a sequence of events which my beloved wife found infinitely amusing.

Monday, February 2, 2009

Redesigning the Road System

This little design of mine scares me; but the sad thing is that I can see it fully realized at this moment, with the current level of technology. What am I talking about? Why, self-driving cars, of course.

Consider these pertinent facts:

  • Update-Ready GPS devices consistently tell us where we are, and where we need to go
  • WIFI Network bandwidth is in massive supply thanks to the recent influx of bands due to the broadcast television turnover to digital
  • Cars come with the following technology today:
    • onboard computers
    • radar (that fun little beep when you back up)
    • video cameras
This design would require a few modifications to the cars rolling off the assembly line as new vehicles (though the components needed could just as easily be added as aftermarket devices) as well as changes to the infrastructure. The good part of this change is that the devices would not affect or otherwise impair existing vehicles, though the system would run more efficiently if every vehicle were tied into the network as a whole.

Changes To Cars
The main changes to the car, outside of the computer brain needed to interpret the incoming data and to do the actual driving (which isn't that far fetched if one takes a look at how well airplane auto-pilots perform these days) would include additional radar devices, RFID readers (4 distinct ones) and a wireless network adapter.

The Radar devices are there to tell the car if something is in front of it, or behind it and how close. Quite simple; after all, no matter how many vehicles you get onto the network, a tree falling across the road can still kill someone if their car hits it at 55MPH.

Now, the RFID chips, these would need to be positioned at the four corners of the car, that way the computer can tell the direction the car is traveling in, and basically ensure that it remains within the lane. We want all four corners to have them so that the car will know when it has completed moves such as merging into a new lane, or turning a corner. More about the RFID chips later though.

The final addition to the cars would be a wireless network adapter. Basically, this would be used to allow the car to talk to all the other nearby cars on the road. If the car intends to move over a lane, it sends out the signal across the network, and pings the other vehicles on their precise location so that it knows whether it needs to speed up or slow down, or just to go ahead and merge into the next lane.

Changes to the Road
The primary change to the roadbed is that every one of those little reflectors would need to be an RFID chip with the exact lat/long coordinates embedded within it. Basically, it would be a little chip that constantly screamed "Here I am!"

Another change, not strictly needed but useful, would be to have network cabling running down the entire length of the road. Then you could routinely put up wireless repeaters, which would allow the vehicles to reach the internet as they drove. This would be useful for things such as real-time weather updates for the next X miles of road, or to allow the driver to send a command to their online stove to start cooking, or even to send an email or SMS to the driver's spouse or kids or parents when the car gets X miles from the house.

Drawbacks
As with anything, there are drawbacks. Little things like: people will still do stupid things in their cars, and older vehicles that have not been upgraded would not be a part of the vehicle network mesh (though every smart car would be able to locate and broadcast the location of a dumb car via its radar systems).

Beyond that, the issues I have with this design are privacy-related. First and foremost, it's a small change for those RFID chips to go from broadcast-only to receive-and-broadcast. Imagine: every day, a thousand cars fly past those little devices. Now, if each car is network-aware, then each car has a MAC address. A MAC address that would probably be tied to the VIN and license plate of the vehicle. Now, if that chip is reading those MAC addresses as the cars go by, then someone with a reader can come along and snatch up all of the traffic metrics for a road. While such data can be useful (for example, accurate counts of high-traffic areas, and the times of day when drivers experience excessive congestion), it can also be put to nefarious uses (tracking where people go).

The next issue is the fact that each of these cars is a small, mobile network, attached to a greater, wide-area network which is the entire set of cars in a given area. What's stopping the government from monitoring everything you do in your vehicle, from a remote point?

I know, and hope, that this was nothing more than an exercise in intelligent design and capabilities, but I actually do kind of fear this system getting installed and put into use. After all, as I said, we have the technology as is to do everything I listed in this article.

Monday, January 26, 2009

WPF & The Data Context

I love me some WPF, though I have to admit that when I first started, I did things wrong. I held onto my old-fangled way of doing things as much as possible, and only hesitantly took steps out into the deep waters where the big fish play.

Which means it wasn't until I started slapping my head up against the datagrid that I first really looked at the Data Context object.

Afterwards, I thoroughly smacked my forehead repeatedly. Imagine it, thousands of lines of code, and a good bit of it can be culled if I had the time or energy to go retrofit it all with Data Contexts.

Now, for the uninitiated, a Data Context is just as it says: an object which holds the data that a WPF form uses as a display source for its UI elements.

So, looking at the eComic application: every CBR file can potentially have an XML fragment/document within it which contains details about that CBR file. This would contain information such as number of pages, page descriptions, artists, writers, series title, issue title, chapter number, volume number, etc. All the important metadata that's truly unimportant to reading an actual issue of a comic.

So, what we have is this standard kind of programming structure: UI - Business Logic - Data Store. These three things are supposed to work together, while remaining within their own domains and never quite touching. Now, the way I had always done this in the past is that either the UI was aware of the Logic layer or the other way around.

In my mind, they had to be in order to talk to one another. It just made sense.

Enter the joys of WPF.

What happens here is that I have my UI and Data Store as usual. Then I have my business logic, and within it is this Data Context object which holds the data from the data store that I am currently interested in.

Taking eComic, let's keep it simple, and only deal with Series Title and Issue Title.

What happens is that I'll create a class, which exposes properties for each of those elements. I can either have the class populate itself, or populate it elsewhere, that's fundamentally irrelevant to the discussion. All we need is the class itself.

The next step is that when we're building our UI in XAML, we need to utilize the Binding markup extension.

So, if I have a Series Title textbox that I want to display in the UI, I'd have this markup:

<TextBox Name="SeriesTitle" Margin="5,5,5,5" Text="{Binding SeriesTitle}" />

Now, here's where it gets real fun. To get the SeriesTitle from my Data Context into that UI, all I have to do is assign my object to that textbox's parent container's DataContext property. Any changes to the textbox are propagated back to the Data Context (a TextBox's Text binding is TwoWay by default, updating the source when the control loses focus), so I never have to manually check the textbox or any other UI element. All I have to look at is the Data Context. Updates in the other direction, from object back to UI, are accomplished via the PropertyChanged event, which is brought to us courtesy of the INotifyPropertyChanged interface found in the System.ComponentModel namespace.
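Wiring that up is a one-liner in the window's code-behind; the field name here is just illustrative:

```vb
' In the window's code-behind. Keeping a field lets us check
' IsDirty later; assigning it to DataContext makes every
' {Binding ...} inside this window resolve against this object.
Private _context As New eComicDataContext()

Public Sub New()
    InitializeComponent()
    Me.DataContext = _context
End Sub
```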

So, I'm using the Data Context, and now I have to determine if I need to save changes to the data store based on user input or not.

It is important here that I not waste CPU cycles in saving data that hasn't changed. So, this is where the PropertyChanged event really comes into play.

First off, since we were smart, we implemented access to the PropertyChanged event via an OnPropertyChanged method, which takes in the name of the property being changed.

Once the event has been raised, we set an IsDirty flag to true. Then we can check that flag to determine whether we need to attempt to save the data or not.
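A save routine can then consult the flag before touching the data store. SaveToDataStore here is a hypothetical persistence helper, not something from the post:

```vb
' Skip the data store entirely when nothing has changed.
Private Sub SaveIfNeeded(ByVal context As eComicDataContext)
    If context.IsDirty Then
        SaveToDataStore(context)   ' hypothetical persistence call
        context.ResetDirtyBit()    ' clean again until the next edit
    End If
End Sub
```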

The full Data Context class looks like this:


Imports System.ComponentModel

Public Class eComicDataContext
    Implements INotifyPropertyChanged

    Private _isDirty As Boolean
    Private _series As String
    Private _issue As String

    Public Sub New()
        SeriesTitle = ""
        IssueTitle = ""
        ' Clear the flag last -- the property setters above just set it.
        IsDirty = False
    End Sub

    Public Property SeriesTitle() As String
        Get
            Return _series
        End Get
        Set(ByVal value As String)
            _series = value
            OnPropertyChanged("SeriesTitle")
        End Set
    End Property

    Public Property IssueTitle() As String
        Get
            Return _issue
        End Get
        Set(ByVal value As String)
            _issue = value
            OnPropertyChanged("IssueTitle")
        End Set
    End Property

    Public Property IsDirty() As Boolean
        Get
            Return _isDirty
        End Get
        Private Set(ByVal value As Boolean)
            _isDirty = value
        End Set
    End Property

    Public Sub ResetDirtyBit()
        Me.IsDirty = False
    End Sub

    Public Sub OnPropertyChanged(ByVal propertyName As String)
        ' Tell any bound UI the value changed, then mark the object dirty.
        RaiseEvent PropertyChanged(Me, New PropertyChangedEventArgs(propertyName))
        IsDirty = True
    End Sub

    Public Event PropertyChanged(ByVal sender As Object, ByVal e As PropertyChangedEventArgs) Implements INotifyPropertyChanged.PropertyChanged
End Class
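A quick round-trip with the class, no UI required, shows the flag doing its job:

```vb
Dim ctx As New eComicDataContext()
' The constructor clears the flag last, so a fresh context is clean.
Console.WriteLine(ctx.IsDirty)   ' False
ctx.SeriesTitle = "Sample Series"
' The setter called OnPropertyChanged, which flipped the flag.
Console.WriteLine(ctx.IsDirty)   ' True
ctx.ResetDirtyBit()
Console.WriteLine(ctx.IsDirty)   ' False
```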
