Monday, October 31, 2011

eComic 2012

Well, I'm in the process of ripping apart eComic and rebuilding it from scratch.

The two biggest aspects that will be changed (from the user's POV) are the inclusion of comic properties and library functions. 

eComic 2012
Well, not so much library functions as card catalog functions. Additionally, I'm modifying the UI to allow multiple comics to be open in tabs (again--I keep going back and forth on this), and structuring it around a UI layout with which I'm highly familiar: Visual Studio.

As you can see from the image to the right there, it contains the tabbed interface, plus allows the LIBRARY to reside in a tab all its own, as well as the PROPERTIES element to always be available.

I'm also keeping the ribbon menu, as I really like ribbon menus. I'm trying to decide if I want to continue using the circle-style application menu (as it exists in the previous version), or go with the square one as shown in the image. I'm not going to lie, I made the icon with the thought that it would fit within the circle-style application menu.

The first aspect of a LIBRARY system that I implemented is the FAVORITES screen. This will also be the start-up screen when a user comes into the application. Each favorite is clickable, and when clicked loads the associated comic into the reader.

The FAVORITES screen will also be the basis for the lists screen (and search results screen) once I get all the library tools built.

I also made the system much more MVVM friendly. 

In the previous version, MVVM was there, but there was still a lot of manual control of the displayed page. Basically, the Reader control was responsible for both loading the pages into the ComicBook class, and controlling which page was selected at any given moment.

I've now made that all controlled by the ComicBook class, via data binding to the relevant list controls. Sure, I can still manually modify it with Next/Previous and Go To functions, but the primary navigation is now controlled entirely via the binding process.
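As a rough illustration of the idea--the member names here are hypothetical, not eComic's actual API--a binding-driven page model looks something like this:

```csharp
using System.Collections.Generic;
using System.ComponentModel;

// Hedged sketch: names are illustrative, not eComic's actual members.
public class ComicBook : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private readonly IList<string> pages;
    private int currentPageIndex;

    public ComicBook(IList<string> pages)
    {
        this.pages = pages;
    }

    public IList<string> Pages
    {
        get { return pages; }
    }

    // A list control two-way bound to this property drives all navigation;
    // Next/Previous/Go To simply mutate it, and the bindings do the rest.
    public int CurrentPageIndex
    {
        get { return currentPageIndex; }
        set
        {
            if (value < 0 || value >= pages.Count || value == currentPageIndex)
                return;
            currentPageIndex = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("CurrentPageIndex"));
        }
    }

    public void NextPage()     { CurrentPageIndex = currentPageIndex + 1; }
    public void PreviousPage() { CurrentPageIndex = currentPageIndex - 1; }
}
```

The point of the pattern is that the selected-item binding on the list control and the Next/Previous buttons all converge on the single CurrentPageIndex property, so there's no manual page-juggling in the Reader control.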

My next steps are to get the LIBRARY working, and to determine if I want this system to watch files, or if I want it to just control the files that are associated with it. There are pros and cons associated with both paradigms, and I'm not sure which is best.

Sadly, I fear I may have to install iTunes and waltz through it, and figure out what it's doing with its media functionality.

The Kindle software makes more sense (everything's in its folders), but it also has the benefit of being the front-end of a store. The primary way you get content into Kindle is by purchasing it from the Kindle Store, and that's just not the case for my software; the primary way you'd get content for eComic is to either a) download it, or b) media-shift the comics that you've purchased.

Additionally, the Zune library works this way, by creating folders and what not on an as-needed basis in a way that makes sense to the software. And that sense tends to be: \\\.mp3

I'm not sure that would work for me, as I have so many fields to consider: Series, Issue Title, Alternate Series Title (i.e. Story Arc Title), Volume, Book Number, Artist, Writer, Publisher and finally Publish Date.

Oh well, I'm sure I'll figure out some sort of solution.

The New Google Reader

I use Google Reader. I've always enjoyed it immensely, and have used it extensively since it was first released.  I've tried other RSS Readers, but frankly, none of them offered me the flexibility which Google Reader provided.

Then they added "social" features. Users could "Like" and "Share" RSS items. While I never used the Like feature (and would install a browser add-in which would hide it) I used the Share feature to share links on this blog.

Now, I'm not up in arms about the loss of this feature, though I will miss it. The reason I'm posting is that I received the updated UI this afternoon, and noticed that in the Tips and Tricks there was a big old article on how to share SHARED items on a blog.

I mean, talk about rubbing salt into the wound of those 10,000+ users that were begging (and signing petitions) Google to keep that particular feature...

Monday, September 19, 2011

Trello Broke My Brain...

Here's a secret: whenever I stumble across a web application that I think is awesome, I look at its source code in an attempt to figure out how they're doing it. Quite often, one can find quite a bit of information by looking at various JavaScript elements, not to mention how the various HTML elements are named.

So, of course, when I discovered Trello, I had to take a gander. And I must say that I do love Trello. It's the first task organizer that I've actually wanted to use, and as one can see, I'm hoping to generate a nice set of tasks for eComic so I can actually keep track of my development efforts there.

At which point, I discovered that Trello doesn't appear to have any of the typical hallmarks by which I would get insights into how things are built.

Case in point, there weren't really any elements on the page which contained little things like NAMEs or IDs.

But, it did provide me the names of a few of the JS libraries that it uses. I found the typical jQuery and jQuery UI libraries, and even the somewhat ubiquitous (at least for Fog Creek software products) JSON2 and MARKDOWN libraries.

But there were a few that I had never heard of at all:
  • backbone.js
  • highcharts.js
  • socket.io.js
  • underscore.js

It's a list which sent me stumbling through the depths of... some developer-appropriate underworld.

Highcharts.js was an interesting find, as it's an interactive charting library for web projects. But there's nothing, well, mind-blowing hiding in its background.

That's where those other three libraries come into play. Underscore and Backbone work hand-in-hand to provide Model-View-ViewModel patterns and templating in web projects, while socket.io.js' whole purpose is the development of real-time apps.

I read through these elements, and considered how they were integrated in LAMP environments, and then began pondering how I could integrate them into an ASP.Net/IIS world.

What I had was a goal: implement the Model-View-ViewModel scenario, coupled with real-time collaboration, without having the browser POLL for data changes. Which, let's be honest, is not exactly the easiest thing to do in ASP.Net.

The first thing I went after was the Model-View-ViewModel situation. While reading up on these elements, I discovered the Knockout library. It looked quite promising, and it was using the jQuery templating engine, which I had previous experience with. Thus, I ran through its tutorials, and got my code generating templated HTML based upon a view model whose data was loaded by a secondary call to a web service returning a JSON object.
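For what it's worth, the server half of that setup can be tiny. This is a hedged sketch--the class and property names are invented for illustration--of an ashx-style handler returning the JSON a view model like that might load:

```csharp
using System.Web;
using System.Web.Script.Serialization;

// Hypothetical DTO; the real shape would mirror whatever the
// client-side view model expects.
public class TaskItem
{
    public int Id { get; set; }
    public string Title { get; set; }
    public int SortOrder { get; set; }
}

// A minimal handler that returns the JSON the view model loads.
public class TaskListHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // In reality this would come from the database, of course.
        var items = new[]
        {
            new TaskItem { Id = 1, Title = "Build LIBRARY screen", SortOrder = 0 },
            new TaskItem { Id = 2, Title = "Wire up bindings",     SortOrder = 1 }
        };

        context.Response.ContentType = "application/json";
        context.Response.Write(new JavaScriptSerializer().Serialize(items));
    }
}
```

On the client, a `$.getJSON` call against that handler feeds the result into the view model, and the templating engine takes it from there.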

Well, that had me dancing for a few days, especially after I got things like sorting working the way I would expect it to in the application.

Which is when I jumped off the deep end. 

The other half of my task was to generate a real-time collaboration system, which basically entails the SQL SERVER having to notify the associated web pages about changes to the specific set of data being displayed on a given page.

The theory is that if I had two people looking at BOARD A, and three people at BOARD B, whenever BOARD A's data was modified, I wanted those who were looking at BOARD A to be notified (via the data being changed), but NOT those that were looking at BOARD B.

And I was doing this in IIS and .NET without the benefits of NODE.js and socket.io. In a few years, when WebSockets become more common (sadly, IE9 doesn't even have them), the way I did this might change. After all, HTML5 WebSockets are designed to provide exactly this concept. Sadly, we're not there yet.

But, what we do have ARE Asynchronous HTTP Handlers, and the SqlDependency class.

Basically, what's happening is that I have an asynchronous web service which I call when I load the page. This web service spawns a new thread which registers a SqlDependency on the relevant query for a board (based on its last UPDATE time). When the result set for that query changes (i.e. values are MODIFIED!), it raises an event, which completes the asynchronous call to the handler.

This gives my SQL Server the ability to send notifications to my HTML pages (which are stateless, remember!) that their data has changed. Thus, I have the secret to near-real-time collaboration!
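In case it helps anyone chasing the same setup, the registration half looks roughly like the following. This is a hedged sketch--the table, columns, and event wiring are placeholders--and it assumes SqlDependency.Start(connectionString) has been called once at application start-up and that Service Broker is enabled on the database (both prerequisites for SqlDependency):

```csharp
using System;
using System.Data.SqlClient;

// Hedged sketch -- table, column, and member names are placeholders.
// Note SqlDependency's query rules also apply: two-part table names,
// an explicit column list, and so on.
public class BoardWatcher
{
    private readonly string connectionString;

    // Raised when the watched board's result set changes; completing the
    // pending asynchronous call to the handler would hang off this.
    public event EventHandler BoardChanged;

    public BoardWatcher(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public void Watch(int boardId)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT BoardId, LastUpdate FROM dbo.Boards WHERE BoardId = @id",
            conn))
        {
            cmd.Parameters.AddWithValue("@id", boardId);

            var dependency = new SqlDependency(cmd);
            dependency.OnChange += delegate
            {
                // SQL Server noticed a change: tell the page to refresh.
                var handler = BoardChanged;
                if (handler != null)
                    handler(this, EventArgs.Empty);
            };

            conn.Open();
            // Executing the command is what actually registers the dependency.
            cmd.ExecuteReader().Close();
        }
    }
}
```

Because the dependency is registered per-query, watchers on BOARD A and BOARD B are independent, which is exactly the board-scoped notification described above.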

Wednesday, August 31, 2011

Win8 Explorer's New UI

The IT geek-world is in something of an uproar over Microsoft's big reveal on Monday (8/29) of the improvements that they're baking into Windows 8's Explorer.

Explorer has--in Microsoft's own words--been the foundation of the user experience of the Windows desktop. And likewise, they've gone to considerable lengths to provide the first substantial change to Explorer in years. There are a few... not necessarily issues or concerns... but something like that, which I, as a technologist, and specifically as a MICROSOFT/WINDOWS-based technologist, have about not just the new UI, but what this new UI means in terms of the data Microsoft provided in the post linked above.

But first, as a Windows 7 user, there is at least one aspect of these UI changes with which I whole-heartedly agree, and that is the return of the UP button. I did not realize just how often I used that simple little command option while performing file-management tasks until it had disappeared. I think that was my biggest stumbling block when switching my OS over to Windows 7.

The second thing about Explorer and the Windows file system that makes me happy is that there will be native support for ISOs in Explorer. This is good news, especially for folks like me (and my company) who routinely store ISOs on the network for ease of backup and maintenance.

Now, the ribbon menu system itself I'm less concerned about than most of the other IT blogs out there. I like the ribbon menu, and think it's a great mixture of a traditional menu and a command bar that is ultimately more useful for end-users.

But I don't think that's going to be the case here. By Microsoft's own metrics, over half of the entry points for Explorer-based commands are the CONTEXT menu (54.5%), with hotkeys coming in second (32.2%). That leaves less than 15% for the command bar and menu bar (and the command bar got 10.9% of that). What this tells me is that users already aren't using the command bar or the menu.

Oddly, Microsoft's reaction to this is to make it--at least at first glance--more complicated.

Microsoft Simplifies Explorer
Luckily, this apparent complication does provide some little pluses. The ribbon provides roughly 200 commands quickly and efficiently to the user. More importantly, the Quick Access Toolbar allows for customization of those most important commands. But again, this is odd, since roughly 82% of all command usage is as follows:
  • Paste (~20%)
  • Properties (~11%)
  • Copy (~11%)
  • Delete (~10%)
  • Rename (~8%)
  • Refresh (~7%)
  • Cut (~7%)
  • NewMenu (~4%)
  • CommandBar (~4%)
  • New (~2%)
When you couple that list with the fact that over half of the commands are accessed via the context menu, I, as a technologist, am left wondering exactly what the point is.

And the thing is that this is aimed at the STANDARD USER. This is not aimed at power users or network admins. In fact, the original blog post describing these changes goes out of its way to indicate as much, by pointing out that the roughly 200 commands found in the ribbon menu will now all support keyboard shortcuts.

Ultimately though, I understand the uproar, while not understanding it. I know from using Office that the Ribbon menu, once users "get" it, is better and more efficient.  Additionally, these changes to the ribbon are not aimed at those users who are currently in an uproar.  After all, it's going to be the power-users who read up on changes to Windows Explorer months before the associated OS is actually released.  

The thing is that this is a big change. Worse, it's a change which feels like feature creep and/or command bloat. I mean, look at the menu: it's taking up more room, hundreds of commands are now exposed, and it's all shoved into a new interface. This, when MS admits that ~82% of all command usage is the 10 commands listed above.

Ultimately, I think a lot of the backwash is due to the smooth UI which was the first taste of Windows 8 in the video released a few months ago.  Power users and early adopters saw that new desktop experience, and started drooling, ignoring the fact that the desktop has little to nothing to do with the underlying file system or the explorer used to access it. 

Could MS have made this "sexier" or "sleeker"? Sure. They could have taken Apple's OSX approach, and given dedicated applications to specific types of tasks.

But, I, as a developer, would be pissed at that. I use the file explorer extensively. I have to move artifacts back and forth between servers and even development-sandboxes on my own machine. I rename things. I delete things, I toss them from one folder to another to my desktop.

But that's at work (or while I'm working).  At home (or while I'm playing), I rarely, if ever, explore the actual depths of explorer; I'll access a few folders where I have media stored, and access a few applications, most of which are stashed in a DOCK application, and on very rare occasions, I'll move media either to my NAS or burn it to disc (or bring it back over to my machine for consumption).

Truthfully, I think this is something of a non-issue. IMO, it's only causing a row right now because of how it compares to the iPad and other tablet devices when Windows 8's first preview in the wild was based around a tablet interface (and more specifically, the promise which that preview held for tablet interfaces).

Sure, it may mean a bit more training time for new users (some of whom may still be coming from Windows XP). Sure, it may be a bit more confusing for those first few weeks after adoption. Sure, it may not be as sleek and stylish as the Mango interface.

But, as long as it does what I need, and stays out of my way while it does it, then I can live with all of it.

Friday, August 26, 2011

Hulu on the TouchPad

The tablet PC that Hulu doesn't like
I purchased the TouchPad in its recent fire-sale, and overall, I'm quite happy with the device. I'm still considering placing Android on it once the various hackers out there get the process down to a science, but that's neither here nor there.

The issue I'm a tad annoyed with is the fact that Hulu refuses to run on this machine.

So, of course, I immediately asked Hulu support why this was, and Hulu's response was:
Thanks for writing in about Hulu on the HP TouchPad. Unfortunately, due to the contractual agreements with our content providers, we have to have a certain set of agreements set up with a device manufacturer before we can provide support for that device. We have no partnership with HP at this time, so we can't currently stream through their device.
We get a lot of requests for webOS support, so it's definitely something that's on our radar. Though we can't support it right now, we are continually evaluating new technologies, and will be adding more devices based on user demand. Stay tuned!

We have plans to bring Hulu Plus to as many devices as possible. For up to date info on which devices are currently supported or have been announced, you can go to ( ). On this page, you can even sign up to be notified of availability on upcoming devices.
Which confused me a bit more. After all, a tablet is nothing more than a small form-factor PC. Why does the manufacturer of my PC matter when I try to access a website? It's not like the fact that I'm running a DELL matters when I access Hulu. I mean, does Hulu not work when people build their own gaming rigs?

So, I of course, pointed this logical fallacy out.

And got this... slightly condescending reply:
Thanks for getting back to us. The TouchPad is considered a device and as such to play our content on it requires certain licenses and approval of the owners of the content on our site. I know it's frustrating to have to pay for services you believe should be free. I'd love to speak to the issue.

When we launched the free service, our rights were limited to streaming on the PC. Unfortunately, mobile and TV devices were not included in these contracts, and in order to add them, we've had to do so under a paid service, Hulu Plus. This is for many reasons, including the high price of these types of streaming rights and because many shows required they be part of a paid service to be on these types of devices.

Once again, I'm sorry for the inconvenience. Please let me know if you have any further questions. Happy viewing!

There are two things that really got my goat here.

The first is the implication that a PC is not a PC. I'm a big fan of clarity of thought and word. I know the definition of a PC--it's a rather large part of my job, after all. For those of you who are wondering, a PC is a computer designed for use by a single person. I wouldn't expect to be able to stream Hulu to a SERVER, since it's designed for use by multiple people, but tablets are designed for use by one person.

My OTHER Tablet that Hulu likes
Additionally, I KNOW that tablet devices can have Hulu streamed to them. I know this because I also own a Viliv S5 tablet. And the Viliv is a smaller form factor (though a thicker, heavier machine) than the TouchPad. So it's not the size of the device--it's not the fact that it's a tablet--that makes a difference to Hulu.

No, it's the operating system. It's the fact that the browser (which runs Flash) on the TouchPad tells Hulu that it's not a Windows device. In fact, I can prove this by using Fiddler to strip the user-agent string out of my HTTP headers--let's just say that doing so makes Hulu not play nice with Windows, either.

The second is that line about having "to pay for services you believe should be free." That makes a rather large assumption that I'm complaining about the cost of Hulu's services--which is not the case. Additionally, even Hulu's "free" service isn't free. There are commercials attached to the video--which is the traditional way that video services have always been paid for (just sit down in front of your television for proof of this).

And even more important in this regard is the fact that if I were merely after free content, it can be had. It's easy to torrent a TV show--and a torrented TV show does not have commercials, and is typically of better quality than the Flash-based video which Hulu streams out.

Regardless, this definitely leaves me in a mood to NEVER buy the Hulu service. After all, if they can't stream the standard service to my small form-factor PC which runs a linux-based OS and Flash, then why on earth would I assume that they could stream the HuluPlus version to my small form-factor PC which runs a linux-based OS and Flash?

And of course I asked them... which leaves me wondering if I'll get an answer...

Thursday, August 25, 2011

Red Gate's SQL Compare

I have a new tool in my toolbox of things I get to play with--while being paid to do so. This one is the SQL Compare utility from Red Gate.

This tool is designed to allow a user to compare--and update--the schemas of two disparate databases.

To which all I can say is a hearty YAY!

After all, consider the project upon which I'm currently working, which is our CloudBuster Web Framework system. This project is basically the love-child of a standard .NET CRM utility and SharePoint. It's a massive undertaking in Visual Studio, with roughly 20 discrete projects hiding within it, all as optimized for genericity and ease of updating and extending as I can make them.

Anyways, we have roughly 25 clients on this framework so far. Some using it as internal intranets (the Sharepoint market space) and others using it as a CRM tool (the DotNetNuke/Joomla market space).

And this is while I'm performing active maintenance and development, making the system better and more robust, giving it more functions and all that sort of good jazz.

But the point here is that that's roughly 25 databases that need to be synchronized every time I make a change to my development database or add new functionality. And while the code base works fine against any of the databases, it's the process of rolling out new functionality, even as older code gets refactored into newer and better paradigms, that drives the need to occasionally synchronize database schemas. Previously, this involved scripting out all the changes I could find between my development database and one of the in-production ones, then running that script against every database, praying that a) I had caught all the changes, and b) any changes I missed weren't breaking ones. This meant visually comparing every database structure in every client database--a process that took roughly 6 hours when I had to perform it against 15 databases back in May.

Well, I got SQL Compare, and was able to run it against all 25 databases over the course of 2 hours.  Two hours which included time to run backups of every database, as well as checking each client's application both before and after the update.

Fun, fun fun!

Thursday, August 18, 2011

Row Not Found...

We (as in the company I work for) have a product which we've named Bounce. Bounce is, as one might guess from the name, a product which reboots servers. It's a solution-in-a-box type thing, that comes in a 1u chassis for server racks.

This product is actually in production at a number of sites, including ours. Additionally, our site has the most complex of the various configurations out there, just because we have a couple of different production networks, our corporate networks, and a "development" network where the servers that we developers torture reside.

Regardless, there was a bug in Bounce where it would randomly report a failure in one of the servers. This little bug had been bothering the network guys for a bit, and they finally brought it to my attention, as well as the boss's--which resulted in some time scheduled to work on the issue, as well as to add a few enhancements that had been hanging out on the drawing board for a while.

So I started work, tested the bounce group that the issue usually popped up in, and found... absolutely nothing. Everything worked perfectly.

So, I chalked it up to gremlins, and implemented the system additions.

Well, I was doing some final system tests when, lo and behold, the issue cropped up. I opened up the event log and saw it somewhat flooded with the same message over and over again; the only difference between the messages was that one was being generated for every server configured in the system.

That error message read: row could not be found or updated.

Talk about useful.

So, I went to Google, and found some discussions that blamed one of two things:

  1. Concurrency issues generated by the "no count" flag being set on the SQL Server's default connection options
  2. A difference between the DBML definition of the table, and the actual underlying table
So, I checked both issues. It's only a matter of moments to open up SQL Management Studio and ensure that no count wasn't checked, and only a bit harder to flat out delete all the tables and views in the DBML and re-drag them into the designer. Sadly, neither solution worked.

Since I was certain that the DBML looked just like the underlying tables (having just dragged them over) I looked more closely into the concurrency issues that folks are reporting.

And I realized that this isn't a SQL concurrency issue--it's not a race condition where two requests are both trying to modify the table at the same time. This is one of those other definitions of the word concurrency: specifically, things being in accordance or agreement.

The below is basically what was happening:

  Get list of devices in BOUNCE QUEUE
  For each device in list:
      Get ComputerDetails as LINQ object
      Perform bounce
      Perform System Checks function on ComputerObject   <-- the important line
      Update properties of ComputerDetails
      Submit changes on the DBML
What is important is the Perform System Checks line. That function was updating the COMPUTER DETAILS table via an EXECUTECOMMAND call on the DBML object.

Basically, it was modifying the underlying data table without updating the ComputerDetails LINQ object.

This was fine until I actually updated the ComputerDetails LINQ object and then submitted it back to the database. When I did that, the system performed a concurrency check (in the "in agreement" sense) against the actual row, using those properties that were not being updated. Since they had been modified elsewhere, outside of the normal LINQ-to-SQL paradigm, LINQ was unable to find the row--or at least, upon finding it, decided that "no, this wasn't really the row I was looking for."

Which means that it happily spat out a "Row not found" error.

I immediately thought up two possible solutions. The first was to re-work the code to use the LINQ object in all those places where in-line SQL was being used. The second was to re-work the one place I was submitting the LINQ object to use in-line SQL instead. Being a lazy programmer, I happily took the latter option.
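For anyone chasing the same error, here's a hedged sketch of the failure mode. Every name here is invented for illustration--BounceDataContext stands in for whatever LINQ-to-SQL DataContext the DBML generates--so treat it as the shape of the bug, not the actual code:

```csharp
// Hypothetical names throughout; not the real Bounce code.
using (var db = new BounceDataContext())
{
    var details = db.ComputerDetails.Single(c => c.Id == deviceId);

    // This write goes straight to the table, so the values LINQ cached
    // when it loaded 'details' no longer match the actual row...
    db.ExecuteCommand(
        "UPDATE ComputerDetails SET LastCheck = GETDATE() WHERE Id = {0}",
        deviceId);

    details.Status = "Bounced";

    // ...and the optimistic concurrency check performed here compares
    // those stale cached values against the row, fails to match, and
    // throws: "Row not found or changed."
    db.SubmitChanges();
}
```

Either direction removes the mismatch: funnel every write through the LINQ object, or (the lazy route) make that final write another ExecuteCommand so LINQ never performs the stale comparison.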

Lo and behold, my event logs are clean, the error has stopped presenting itself, and all is happy and right with the world. At least until the next bug.

Monday, May 30, 2011

Partitioning .NET code

… has an article up currently on partitioning .NET code, an issue which I'm intrigued by, and recently had to actually tackle.

My situation was that the company I work for has a rather large framework system in place for a lot of our development efforts, especially where intranets are concerned. I was adding functionality to this system, and started having trouble keeping track of my efforts.

I mean, this system had a number of aspects going on:

  • HTTP Modules (entirely code based, and defined in the WEB.Config as opposed to ashx-style handlers)
  • Worker classes
  • Security
  • Data Layer
  • Sandboxes for the home page
  • Interfaces project for custom additions to the system

This was in addition to the base web-forms (things like the home page, the master page, content pages, administration pages, calendar pages, etc.), and the add-ins for specific clients (recently a client wanted a new sandbox which displayed a random image from a selected list of images that had been uploaded into the application).

Now, originally, I had two projects. The first was the framework, and the second was the interface project. This "worked," but it was hard to grok in its entirety, mainly because there were just so many moving parts to the thing.

So, I did what I felt was the most reasonable solution: I yanked it apart into discrete, functional DLLs. Then I took those DLLs and broke them into logical namespaces. For example, I have a number of HTTP modules in the system. Two of the categories they can be broken into are serving images and serving other files from the database. To handle this, I added a FILE namespace and an IMAGE namespace under the .HttpModules namespace of my library project.

What all this means is that I have a number of library projects for the core functionality that hides in the system, and those DLLs are then broken up into namespaces around specific types of those core functionalities.
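As a hedged illustration--the real project, namespace, and type names certainly differ--the resulting layout looks something like:

```csharp
// Illustrative only: hypothetical names, not the actual framework's.
namespace Framework.HttpModules.Images
{
    // Modules in this namespace serve images stored in the database.
    public class DatabaseImageModule : System.Web.IHttpModule
    {
        public void Init(System.Web.HttpApplication app)
        {
            // Hook BeginRequest, inspect the URL, stream the image...
        }

        public void Dispose() { }
    }
}

namespace Framework.HttpModules.Files
{
    // ...while these serve other file types from the database.
    public class DatabaseFileModule : System.Web.IHttpModule
    {
        public void Init(System.Web.HttpApplication app) { }
        public void Dispose() { }
    }
}
```

Each module is then registered in web.config rather than as an ashx-style handler, e.g. `<add name="Images" type="Framework.HttpModules.Images.DatabaseImageModule, Framework.HttpModules" />` (names again hypothetical).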

This works, since I have two ways to extend this project. I can either use stand-alone DLLs, that are instantiated at run-time, or I can use a second project which uses ASCX files that again, I can load in at run-time.

But the important thing, from my point of view, is that I can easily determine where a code object belongs in my core package. This makes new development, bug fixes and testing much easier. And I'm also not loading a bunch of classes into the memory space unless absolutely necessary.

The only wrinkle with how dynamic this whole system ultimately is, is that it's actually hard to test from within the VS IDE. I have to deploy the projects to a test web server in order to fully access all of its disparate parts. But that's more due to the fact that objects are being late-bound into the application than anything to do with the partitioning into separate DLLs.

Monday, February 21, 2011

Data Cubing for fun and profit!

Amusingly enough, I actually am having fun with the data cubing stuff.

After all, I'm not the one writing the query that generates the data, but rather the UI designer/implementer for this--a role which I'm finding myself fulfilling more and more often, as my boss is a much more powerful SQL GURU than I, whereas I'm much more familiar with the UI side of things, especially dynamic UIs on thin clients (web-based).

But, it's still grand fun.

And another thing that's still grand fun is my 'play project,' eComic. The last version (which is nearing a year in age now) has over 10,000 downloads (running an average of 317 downloads per week these past six months, and nearly 1,000 page views in the same time frame) and 11,393 unique visitors. Which means that for roughly every 1.3 visitors, there's a download of eComic.

But even more importantly--and something that makes me feel good on both a personal and professional level--is that eComic was used as a referenced answer in a StackOverflow question. Additionally, it wasn't ME that referenced it, and I'm not even familiar with the user who did. It was one of those anonymous things that make it so powerful.

All that said, I have retired one of my pet projects (which had been resting in limbo, obscenely under-utilized): The Expanded Universe (a ficlets site). This is due to the fact that there just was not enough traffic to the site to warrant its continued support.

That said, I do have two additional pet projects which I'm considering. One is something I'm keeping under my belt for the moment, while the other is a web-based version of eComic. That one will probably be monetized (whereas eComic itself is not), since hosting, bandwidth and disk space are expensive--especially when dealing with image files.

Oh well, that's enough for tonight, after all, I have coding I have to get done.
