Monday, May 18, 2009

Preventing Application Scope Conflicts with Dynamic Application Names

Today I was setting up a client's application to have separate Production and Staging environments; the team is already running our own local Development environments, but we didn't yet have a non-live location to show off current changes without dragging people over to one of the dev boxes.

Due to various restrictions, the staging environment needed to be on the same box (running CF8 Standard) as the production app.  After setting up a separate database, DSN, etc., I ran into one more issue: application variables.

CF stores application data based on the name of the application.  I now had two copies of the site on the same box, both using the same application name.  When I configured Staging to use the new DSN (stored via application.dsn), Production's DSN was also changed.

I needed to differentiate the applications' names based on their purpose.  I already had an external config file, but it was only being loaded on application start, and I didn't like the idea of running a CFINCLUDE on every page load just to get the "mode" of the current install.  Plus, this config file was written to set application variables... which wouldn't yet exist if I were to make this change, requiring more refactoring.

I thought about using the CGI scope to detect the requested domain name, but this particular client/application makes it likely that URLs could change.

I realized that the only thing I could use to uniquely identify a collection of files was the path to the actual location of those files.  Which led me to:

<cfset this.name = "myapp-#hash( getCurrentTemplatePath() )#">

getCurrentTemplatePath() returns the path to the currently executing CF file, which will always be the Application.cfc.  I then hash() that value to remove odd characters and standardize the length.
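In context, the relevant bit of Application.cfc looks something like this (a sketch; the "myapp" prefix, timeout, and DSN value are placeholders, not the client's actual settings):

```cfml
<!--- Application.cfc --->
<cfcomponent output="false">
	<!--- Hash the on-disk path of this file so every install gets a unique name --->
	<cfset this.name = "myapp-#hash( getCurrentTemplatePath() )#">
	<cfset this.applicationTimeout = createTimeSpan( 1, 0, 0, 0 )>

	<cffunction name="onApplicationStart" returntype="boolean" output="false">
		<!--- Per-install config (e.g. application.dsn) still loads here,
		      but now each copy of the site writes to its own application scope --->
		<cfset application.dsn = "myapp_staging">
		<cfreturn true>
	</cffunction>
</cfcomponent>
```

Since the hash is computed from the file's own location, two checkouts of the same code on the same server can never collide.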

The added benefit of this is that EVERY installation of this application will have a unique application name, and it doesn't have to know anything about the other applications or what "mode" it is running under.  This could be helpful as I might need to add another copy of the site to the same server, as a sort of remote-hosted development location for a specific dev.

The other thing I learned today? I can't blog short even for simple topics.

Thursday, May 14, 2009

Retro Game Challenge

It's been a long while since I last bought a video game, so I picked up Retro Game Challenge for the DS today.

It's a collection of fake 80s games, all eerily similar to actual 80s games, with the contrived storyline that you've been sucked into the past and need to complete various challenges in those games in order to return to your own time.

I've only barely started, but already there are plenty of amusing references to early gaming, both obvious and not.

The first game is a Galaga clone, and when you finish the bonus round, it triumphantly tells you that "YOU SHOOTED 28 ASTEROIDS!"  This game is faithful to the retro games, right down to the poor translations.  Color me tickled.

Automated Deployment Crash Course

I've been spending a lot of time this week working out my first automated deployment process.  And, as usual, it's not going to end up being a nice, simple one.  We've got branches, tags, and only one developer actually comfortable with Subversion (me).  We've got external developers who, by their own admission, know just enough to be dangerous.  Schema updates.  SFTP connections.  Firewalls.  Heck, there's probably a ninja riding a shark somewhere in this project.

Anyway, I've played with ANT a little in the past, but I had trouble getting it to connect over SFTP and more issues with integrating SVN due to our self-signed certificate on the repo.  Plus, the external developers were already clamoring to "just have FTP access" to the production server, and I figured I was going to have a hard enough time getting them to use SVN, much less ANT.

On the production server, the app's code exists as a working copy checked out from the repo.  This has made deployment simple thus far: connect to server, update working copy.  It also made the current situation more difficult (if someone DOES edit the files on the production server, we risk conflicts on an update, and that will put non-CFML conflict markers into the files, probably bringing the whole site down).

The solution I settled on was to use CFEXECUTE to trigger the svn command on the server through a web interface.  I'd tried this before with little luck, but this time I pressed further and managed to solve all my issues.

The first step was grabbing an svn executable.  The app in question is on a Windows server, and we'd been using TortoiseSVN, which doesn't have a (traditional) command line interface.  Enter SlikSVN.

Next came about three hours of experimenting to get CFEXECUTE to call the executable, skip interactive prompts ( --non-interactive ), accept our self-signed certificate ( --trust-server-cert ), and pass in a username and password (required even for list and status commands, apparently).

The largest waste of time came while I was getting zero results and couldn't tell why.  I figured SVN was showing an error, but CFEXECUTE wasn't capturing error output.  That's when I stumbled across a neat feature added in 8.0.1: CFEXECUTE's errorVariable attribute.  It works just like "variable", except it captures the error output instead of the standard output.  Turns out I was forgetting the /svn/ part of our repo URL.  Oops.
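The whole thing boils down to a call along these lines (a sketch; the svn.exe path, working copy path, credentials, and timeout are placeholders for illustration):

```cfml
<!--- Trigger "svn update" on the server's working copy from a web request --->
<cfexecute name="C:\Program Files\SlikSvn\bin\svn.exe"
	arguments="update C:\inetpub\wwwroot\myapp --non-interactive --trust-server-cert --username deploy --password secret"
	variable="svnOutput"
	errorVariable="svnError"
	timeout="120" />

<cfif len( trim( svnError ) )>
	<!--- errorVariable (new in CF 8.0.1) captures stderr separately from stdout --->
	<cfoutput><pre>Deployment failed: #svnError#</pre></cfoutput>
<cfelse>
	<cfoutput><pre>#svnOutput#</pre></cfoutput>
</cfif>
```

Without that errorVariable check, a failed update just looks like an empty result, which is exactly the silence I was debugging.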

Anyway, we now have a setup where the site is a working copy attached to the trunk, and external developers have access to the trunk and the ability to force deployments through a web interface.  Meanwhile, my team will concentrate on adding features in a new branch, and merge to the trunk when we're ready to deploy.

This is still not ideal.  The next step in the process will involve creating a separate staging version of the site for testing, and a new process for deploying to the production site that will utilize tags to keep production as stable as possible.

In other words, my Automated Deployment is currently a manual process.  But it's a good first step.

Still, I was able to come up with a solution that satisfies the needs of the external developers and lets us retain the power of svn.  And also start training people that editing live web sites directly is a Bad Idea.  Honestly, what we've got right now isn't too much different, but we'll solve that once I separate Staging from Production.

Thursday, April 30, 2009

Netbook achieved.

Finally got myself a netbook.  I've been holding out for one of the touchscreen models hoping I could use it to replace my pen-and-paper note-taking process, which tends to result in lots of paper and little organization.  But the waiting finally got to me, and I found a great deal that will help tide me over, letting me be picky about my eventual choice.

I picked up an Eee PC 900, a pre-Atom version, w/ 512MB RAM, 4GB drive and Linux OS.  I must admit, I didn't play with the pre-installed OS for long, but just long enough to discover it was having issues connecting to my WiFi.  I poked around a bit (it has a Voice Command icon that intrigued me, but I never checked it out), but then installed the new Ubuntu Netbook Remix.

The install was simple, but not without odd issues.  In order to boot off the USB drive, I ended up disabling the internal drive through the BIOS, only to determine that I had to switch the USB stick to a second port; for some reason, the Eee didn't want to boot off the right-hand port closer to the front.

Once I was in the "Live" test of Ubuntu, the first impression was disappointment; the mouse was sluggish and the interface was very laggy.  But once I opened Firefox, it was all very snappy.  Turns out there is a known issue with some of the older netbooks, and there's a new kernel that fixes it.  Information is here.  The fix is just new enough that you have to install it manually, but I suspect it'll be in the repos soon enough.

So now I have a working netbook with a well-supported OS and more free space than the factory-installed system (which dedicates half the drive to a restore partition).  Now my only concern is to figure out how best to make use of it; as a mini-laptop, it's a bit too awkward for note-taking, so it'll serve the role it was designed for: a portable, quick-access net device.

The one issue I've neglected to mention thus far is that I've got a bad key.  My "I" key wants to be hit juuuust right to register, and even then it's probably only 50%.  I alerted ASUS to the issue, but I'm just not sure it would be worth the time and money to ship it off for this.  Though it certainly could have chosen a more convenient key to be flaky on.  Vowels are overrated, anyway.

Tuesday, March 3, 2009

Getting Closer...

I've been more productive in one-and-a-half hours this evening than I managed all weekend. I'm not sure whether I like where this is going.

In any case, I am now incredibly close to having my personal server configured the way I want it.  Or at least doing the things I want it to do.  In order to know how I want it configured, I still have to figure out exactly what I've actually done.

Which, thus far, amounts to getting Railo Community running on Resin through Apache, and managing to get the whole thing to survive reboots.  All with a side journey of getting my iptables config to reload itself after reboots.

I've been running Linux as my main desktop for three years now, but my eyes still glaze over whenever I need to dip into the deeper system-level configurations.  Too many flavors of Linux, too many how-tos that just tell you mindless steps to follow.  Once I'm through piecing together what I need from three or four of them, I'm incredibly unlikely to know what I just did, much less be able to repeat it on my own.

In short, we need fewer How-Tos, and more Why-Tos.

Oh, and also, I've installed this blog.