Archive for March, 2009

SB 1522 From the IL 96th General Assembly Should Be Passed

SB 1522 (http://tinyurl.com/c2b4yx) in Illinois sounds promising for IL-based startups. Call your IL state senator: http://tinyurl.com/dgpmzw.

What does the bill do? I’m just going to grab the text from the May Report. I’m skipping adding links because any Illinois resident who reads my blog doesn’t need more links to click on. Not this time!

  • A capped ($10 million annually) grant-matching program. For example, if a qualified Illinois tech startup obtains a $100,000 federal Small Business Innovation Research (SBIR) grant, the state of Illinois will match that grant.
  • A capped ($15 million annually) investment tax credit for state-registered and qualified early-stage investors. Under appropriate circumstances, such an investor making an early-stage investment in a technology startup would receive a capped credit against his/her Illinois tax bill.

Why we need it:

Illinois’ failure to translate its world-class research capabilities into a vibrant startup community has been documented in depressing detail by numerous studies over the last 10 years. Technology firms created here frequently move to other states, including those in the nearby Midwest, recruited away by SB 1522-like programs. As a result, brains, talent, and Illinois-taxpayer-financed technical discoveries leave our state.

What you can do:

  1. Find your state Senator: http://tinyurl.com/dgpmzw.
  2. Call your Senator and ask him/her to support SB 1522. Stress that:
    • Illinois needs the jobs and tax base that tech firms provide.
    • This bill provides much needed stimulus to our economy, because small companies create 60-80% of new jobs and each tech job creates 3-6 jobs indirectly.
    • The funding is so tiny against the total state budget that it will have little if any impact on the deficit/tax picture.

Please act now! State legislators say that if 3-5 people call them on a bill, they really take notice.  You can make sure these much-needed programs are available to our Illinois entrepreneurs.  Thanks for your help! Again, the URL to locate your state senator is http://tinyurl.com/dgpmzw.


Moving from Azure Desktop → Cloud: Table Storage Issue

Here’s a small gotcha that I didn’t see covered in the usual Google results. So, I’m writing down the problem and the solution so that I can find the answer when I need it again. I’m sharing via my blog to help you out too. If this helps you, click on a link and send some change my way :)

Symptom: You followed the rules and ran “Create Test Storage Tables” from Visual Studio on your dev box. All your local testing seems to work. When you deploy, you see an error like this (I’m leaving lots of Google discovery goodness in here; if this saves your bacon, send me a thank you!):


<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
<code>TableNotFound</code>
<message xml:lang="en-US">The table specified does not exist.</message>
</error>

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Data.Services.Client.DataServiceClientException: <?xml version="1.0" encoding="utf-8" standalone="yes"?>
<error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
<code>TableNotFound</code>
<message xml:lang="en-US">The table specified does not exist.</message>
</error>
Source Error:

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

Stack Trace:

[DataServiceClientException: <?xml version="1.0" encoding="utf-8" standalone="yes"?>
<error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
  <code>TableNotFound</code>
  <message xml:lang="en-US">The table specified does not exist.</message>
</error>
]
   System.Data.Services.Client.<HandleBatchResponse>d__1d.MoveNext() +1294

[DataServiceRequestException: An error occurred while processing this request.]
   System.Data.Services.Client.SaveAsyncResult.HandleBatchResponse() +391100
   System.Data.Services.Client.DataServiceContext.SaveChanges(SaveChangesOptions options) +177
   Microsoft.Samples.ServiceHosting.StorageClient.<>c__DisplayClass1.<SaveChangesWithRetries>b__0() in C:\Users\Scott Seely\Documents\Visual Studio 2008\Projects\AzureSamples\StorageClientLib\TableStorage.cs:1227
   Microsoft.Samples.ServiceHosting.StorageClient.RetryPolicies.NoRetry(Action action) in C:\Users\Scott Seely\Documents\Visual Studio 2008\Projects\AzureSamples\StorageClientLib\BlobStorage.cs:220
   Microsoft.Samples.ServiceHosting.StorageClient.TableStorageDataServiceContext.SaveChangesWithRetries() in C:\Users\Scott Seely\Documents\Visual Studio 2008\Projects\AzureSamples\StorageClientLib\TableStorage.cs:1215


The critical part of fixing this was making sure that my tables were actually initialized prior to running any queries. The class that holds my IQueryable (so that table storage actually works) now has an Init() function. The class now reads:

    public class AzureMembershipDataContext : TableStorageDataServiceContext
    {
        public AzureMembershipDataContext() :
            base(StorageAccountInfo.GetDefaultTableStorageAccountFromConfiguration())
        {
            Init();
        }

        private const string TableName = "AzureUserTable";
        public IQueryable<AzureUserData> AzureUserTable
        {
            get
            {
                return CreateQuery<AzureUserData>(TableName);
            }
        }

        private static bool _initialized = false;
        static void Init()
        {
            if (!_initialized)
            {
                StorageAccountInfo storageAccountInfo =
                    StorageAccountInfo.GetDefaultTableStorageAccountFromConfiguration();
                TableStorage.CreateTablesFromModel(typeof(AzureMembershipDataContext), storageAccountInfo);
                _initialized = true;
            }
        }

        public void Add(AzureUserData data)
        {
            base.AddObject(TableName, data);
        }
    }

And yeah, that call to CreateTablesFromModel in Init() is super important. Without it, you can’t do anything. Make sure the model is created from the type containing your IQueryable; going off of the table type doesn’t work with the API.


AJAX and WCF

Most of the information in this post applies to .NET 3.5. Code examples are in F# because I’m teaching myself F#. If you want C#, wait until I post the same material on Azure, or buy my book, Effective REST Services via .NET.

I’m currently working on the finishing touches of an Amazon Web Services version of my PhotoWeb application. Because EC2 supports Windows, I’m building the whole thing for IIS, using F# as the programming language. One of the features of this application is that it uses AJAX to update metadata about the images on the screen. To handle AJAX, one commonly uses either ASMX or WCF. In .NET 3.5, the WCF team added some functionality to make AJAX a no-brainer: a ServiceHostFactory-derived class that just does the right thing, plus a few new attributes. This means no updates to web.config are needed to make your service accessible to the world.

I am going to go through the mechanics of setting things up without diving into how the application interacts with storage; I’ll write those posts later.

The service interacts with a type: FSWebApp.ImageItem. ImageItem has the following structure:

[<DataContract>]
type ImageItem() =
    [<DefaultValue>]
    [<DataMember>]
    val mutable ImageUrl: System.String

    [<DefaultValue>]
    [<DataMember>]
    val mutable ImageId: System.Guid

    [<DefaultValue>]
    [<DataMember>]
    val mutable Description: System.String

    [<DefaultValue>]
    [<DataMember>]
    val mutable Caption: System.String

    [<DefaultValue>]
    [<DataMember>]
    val mutable PublicImage: System.Boolean

    [<DefaultValue>]
    [<DataMember>]
    val mutable UserName: System.String
The [<DefaultValue>] is an F#-ism stating that the mutable value should be initialized to the proper zero value for its type. This is the same as initializing each member in C# to default(type), which yields 0, null, or false as appropriate.

With a data type to pass around, we can define some methods that use it. I know this is WCF, and many of the early examples show defining an interface and then a concrete type. We will skip the interface here. It is OK to declare the ServiceContract and OperationContract information directly on the type, since we won’t be writing any code to create a System.ServiceModel.Channels.Channel to that new type. (WCF requires an interface at the ServiceModel layer when creating your proxies. One is always present in the generated code. Any classes you use are helpers that eventually call into an interface.)

The simple operations are:

  • Delete an image
  • Update an image’s metadata
  • Get all image metadata for the current user

The do-nothing WCF implementation is then:

[<ServiceContract>]
type PhotoWebService() =

    [<WebInvoke(Method = "POST")>]
    [<OperationContract>]
    member this.DeleteImage(imageId: System.String) =
        ()

    [<WebInvoke(Method = "POST")>]
    [<OperationContract>]
    member this.GetImagesForUser() =
        [|(new ImageItem()); (new ImageItem())|]

    [<WebInvoke(Method = "POST")>]
    [<OperationContract>]
    member this.UpdateImage(item: ImageItem, imageId: System.String) =
        ()
You may have noticed that all the parameters have explicit types. This is because the F# compiler cannot infer the types without seeing usage of the methods. Because the callers will be external to the assembly, we have to tell the compiler what types to expect for the exposed methods.

Finally, we need to modify the .svc file so that the right ServiceHostFactory is used (and we can stay out of config!):

<%@ ServiceHost Language="F#" Service="FSWebApp.PhotoWebService" Factory="System.ServiceModel.Activation.WebScriptServiceHostFactory" %>

With this in place, I added a ScriptManager to the page I was working with and got a beautiful JS file returned to me that knows how to interact with the PhotoWebService endpoint.

<asp:ScriptManagerProxy ID="scriptManagerProxy" runat="server">
    <Services>
        <asp:ServiceReference Path="~/PhotoWebService.svc" />
    </Services>
</asp:ScriptManagerProxy>

Looking in a tool like Firefox’s Firebug, I see the script is being created for me, meaning that everything is wired up correctly!

For those of you who want a glimpse at what gets generated, here you go!

Type.registerNamespace('tempuri.org');
tempuri.org.PhotoWebService = function() {
    tempuri.org.PhotoWebService.initializeBase(this);
    this._timeout = 0;
    this._userContext = null;
    this._succeeded = null;
    this._failed = null;
}
tempuri.org.PhotoWebService.prototype = {
    _get_path: function() {
        var p = this.get_path();
        if (p) return p;
        else return tempuri.org.PhotoWebService._staticInstance.get_path();
    },
    DeleteImage: function(imageId, succeededCallback, failedCallback, userContext) {
        return this._invoke(this._get_path(), 'DeleteImage', false, { imageId: imageId }, succeededCallback, failedCallback, userContext);
    },
    GetImagesForUser: function(succeededCallback, failedCallback, userContext) {
        return this._invoke(this._get_path(), 'GetImagesForUser', false, {}, succeededCallback, failedCallback, userContext);
    },
    UpdateImage: function(item, imageId, succeededCallback, failedCallback, userContext) {
        return this._invoke(this._get_path(), 'UpdateImage', false, { item: item, imageId: imageId }, succeededCallback, failedCallback, userContext);
    }
}
tempuri.org.PhotoWebService.registerClass('tempuri.org.PhotoWebService', Sys.Net.WebServiceProxy);
tempuri.org.PhotoWebService._staticInstance = new tempuri.org.PhotoWebService();
tempuri.org.PhotoWebService.set_path = function(value) { tempuri.org.PhotoWebService._staticInstance.set_path(value); }
tempuri.org.PhotoWebService.get_path = function() { return tempuri.org.PhotoWebService._staticInstance.get_path(); }
tempuri.org.PhotoWebService.set_timeout = function(value) { tempuri.org.PhotoWebService._staticInstance.set_timeout(value); }
tempuri.org.PhotoWebService.get_timeout = function() { return tempuri.org.PhotoWebService._staticInstance.get_timeout(); }
tempuri.org.PhotoWebService.set_defaultUserContext = function(value) { tempuri.org.PhotoWebService._staticInstance.set_defaultUserContext(value); }
tempuri.org.PhotoWebService.get_defaultUserContext = function() { return tempuri.org.PhotoWebService._staticInstance.get_defaultUserContext(); }
tempuri.org.PhotoWebService.set_defaultSucceededCallback = function(value) { tempuri.org.PhotoWebService._staticInstance.set_defaultSucceededCallback(value); }
tempuri.org.PhotoWebService.get_defaultSucceededCallback = function() { return tempuri.org.PhotoWebService._staticInstance.get_defaultSucceededCallback(); }
tempuri.org.PhotoWebService.set_defaultFailedCallback = function(value) { tempuri.org.PhotoWebService._staticInstance.set_defaultFailedCallback(value); }
tempuri.org.PhotoWebService.get_defaultFailedCallback = function() { return tempuri.org.PhotoWebService._staticInstance.get_defaultFailedCallback(); }
tempuri.org.PhotoWebService.set_path("/AWS/PhotoWebService.svc");
tempuri.org.PhotoWebService.DeleteImage = function(imageId, onSuccess, onFailed, userContext) { tempuri.org.PhotoWebService._staticInstance.DeleteImage(imageId, onSuccess, onFailed, userContext); }
tempuri.org.PhotoWebService.GetImagesForUser = function(onSuccess, onFailed, userContext) { tempuri.org.PhotoWebService._staticInstance.GetImagesForUser(onSuccess, onFailed, userContext); }
tempuri.org.PhotoWebService.UpdateImage = function(item, imageId, onSuccess, onFailed, userContext) { tempuri.org.PhotoWebService._staticInstance.UpdateImage(item, imageId, onSuccess, onFailed, userContext); }
var gtc = Sys.Net.WebServiceProxy._generateTypedConstructor;
Type.registerNamespace('FSWebApp');
if (typeof (FSWebApp.ImageItem) === 'undefined') {
    FSWebApp.ImageItem = gtc("ImageItem:http://schemas.datacontract.org/2004/07/FSWebApp");
    FSWebApp.ImageItem.registerClass('FSWebApp.ImageItem');
}


Software Craftsmanship—Separating the Wheat from the Chaff

I don’t want to talk about separating out good developers from bad ones. This happens naturally enough if you have a good interview and probation process. Be honest, see things as they are, and you will identify genuinely good developers. Instead, I want to talk about Software Craftsmanship as a movement. I’ve been lurking within the Software Craftsmanship community for a while and I am disgusted by much of what I am seeing from the talkers. Every group like this has at least two factions:

  1. Folks who want to proselytize. This group derives its self-worth from speaking on the topic and explaining why anyone not following its practices is an armpit-sniffing moron.
  2. Folks who take the good ideas and use them. This camp is too busy doing a better job today than they did yesterday to spend time converting others.

Software Craftsmanship as a whole has some fairly good ideas:

  • Test Driven Development: Write tests before you write the actual code. This practice has some great benefits. The code winds up being very usable because it is written with a focus on the consumer of the code instead of the producer. The code tends to be fluent and to have simpler parameter lists.
  • Always be delivering something: Part of Scrum, this means a development team performs best when its time horizon is always 1-3 weeks out. This also allows for a tighter feedback loop, since the consumer always has something they can use. As the requirements and reality converge, less time is wasted on things that are wrong.
  • Continuous Integration: You should always have a buildable system. A broken build is a show stopper for the person who broke things. Even better, have a staged build system where the developer checkin is built on a ‘clean’ machine and has all tests run. If the build fails or any tests fail, the checkin can’t go through. This keeps the build clean all the time. (We did this at Microsoft for WCF. It was great being able to get a clean build at any time of the day or night.)
  • Monitor code coverage: Look at which blocks of code are NOT covered. By analyzing which parts aren’t tested, you can learn where you are missing tests. After all, we are going to miss things when practicing TDD. Code coverage lets us know when we skipped a step.
  • YAGNI: You aren’t going to need it. This stops us from worrying about every imaginable use case for a feature. Satisfy the customers you know about, and design for ease of change via solid tests. When those other use cases appear (later), you can re-run the tests to make sure adding the use case left the system in a stable state.
  • When checking in a bug fix, check in a new test that verifies the bug was fixed. This step prevents regressions, and regressing is always more expensive than writing a test.
  • Inversion of Control/Mocks: Programming to interfaces allows for looser coupling between systems. Looser coupling allows developers to work on dependent features simultaneously. Once the features are ready, the two can use a day or two of pair programming to wire the features together.
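The last bullet is easy to sketch. Here is a minimal, hypothetical F# example of programming to an interface; all of the names (IPhotoStore, FakePhotoStore, describeImage) are made up for illustration, not taken from any real project:

```fsharp
// Programming to an interface: the consumer depends on IPhotoStore, not on
// a concrete storage class, so a fake can stand in during tests.
type IPhotoStore =
    abstract GetCaption : string -> string

// An in-memory fake, usable before the real store even exists.
type FakePhotoStore() =
    interface IPhotoStore with
        member this.GetCaption(imageId) = "fake caption for " + imageId

// The feature under test only ever sees the interface.
let describeImage (store: IPhotoStore) imageId =
    sprintf "Image %s: %s" imageId (store.GetCaption(imageId))

printfn "%s" (describeImage (FakePhotoStore() :> IPhotoStore) "123")
```

Because describeImage takes the interface, two developers can build the consumer and the real store in parallel and wire them together later.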

There are more items. My point is: these are all good ideas. These ideas are NOT a methodology. I used all of these practices at Microsoft from 2000-2006. I’m sure that Microsoft was doing this long before I showed up. When you build software, you need Build Verification Tests (aka TDD). You should have mechanisms in place that guarantee that the build is always valid. You should be able to design features in parallel; the mechanisms used to design features in parallel also make those features easier to test and diagnose!

This is where I have my issues with the Software Craftsmanship community. You aren’t superior to anyone. Your ideas aren’t new; they are OLD. Mechanical engineers and electrical engineers have known these tricks for a long time. So have carpenters, electricians, and every other kind of craftsman. I have no issue with educating developers on how to raise the level of quality. Stop writing manifestos and just teach your fellow programmers how to do a good job. Explain why you do what you do so that we may learn which practices work. Drop the attitude of superiority, though. I can’t stand to listen to you. Your rhetoric blocks the real message: that these practices actually allow people to get more done with fewer errors, that they allow the creativity of the group to flourish, and that they let cooperation work better.

I think there is a lot of potential in Software Craftsmanship. I think a better focus on the practices and away from the ‘movement’ aspects will win more often. The vocal members of the group like feeling superior instead of being helpful. I can’t get behind that kind of thing.


Layered design and normalized behavior vs. "The other way"

One of the LCNUG members, Kurt Schroeder, has volunteered to present at this month’s regular meeting. We’re a typical INETA group: food/tech/discussion during the 2-hour meeting. Kurt’s talk is definitely technical, but it focuses on the lessons learned while constructing a system quickly and then rebuilding it to make the system better (easier to maintain, enhance, and comprehend).

Kurt will be spending a lot of time in code, showing the original implementation and how it morphed over time. We all have bad code in our systems that we want to make better. See how Kurt evolved the ever-so-sexy-sounding “stock market system based on point and figure charting.” Trust me, it’ll be interesting.

Join us:

Sign up at: http://www.eventbrite.com/event/300902006

When: March 26, 2009 6:30 PM – 8:30 PM

Where: College of Lake County: Technology Building in room T326-328.

Directions

Campus map


Seq.find—Neat!

As I continue to write more F# code, I keep finding interesting uses of the language and its features. Today, I’m going to talk about Seq.find. Seq.find loops through a sequence (an IEnumerable in .NET speak) looking for an item that satisfies a predicate, a function that returns a boolean. As soon as the function returns true, the iteration stops and the found item is returned. Maybe it will be easier to just show what the code looks like:

#light

let thelist = [|("joe", 1); ("jim", 3); ("scott", 6); ("kenn", 29)|]
let scottsNumber = thelist |> Seq.find (fun (a, b) -> a = "scott") |> (fun (a, b) -> b)
printfn "%d" scottsNumber

The code has an array of tuples. It pipes that array into Seq.find. Because our tuples have two values, the predicate also takes a two-value tuple. The result of the evaluation is the first tuple whose first value is “scott”. The final function takes the second value and returns it. This value is then assigned to scottsNumber and written to standard output. What would this same code look like in C#? Well, if we want to keep the intent identical, the evaluation would be:

var theList = new[]
                  {
                      new {name="joe", number=1},
                      new {name="jim", number=3},
                      new {name="scott", number=6},
                      new {name="kenn", number=29},
                  };
int foundItem = -1;
foreach (var listItem in theList)
{
    if (listItem.name == "scott")
    {
        foundItem = listItem.number;
        break;
    }
}
Console.WriteLine(foundItem);

This is as close as I can get to the pithiness of the F# code while preserving the notion that we bail out on the first matching item. Given the lazy evaluation semantics of LINQ, the following is even closer:

var theList = new[]
                  {
                      new {name="joe", number=1},
                      new {name="jim", number=3},
                      new {name="scott", number=6},
                      new {name="kenn", number=29},
                  };
var scottsNumber = (from listItem in theList where listItem.name == "scott" select listItem.number).First();
Console.WriteLine(scottsNumber);

Personally, I find the F# version easier to read, though the LINQ syntax is also becoming clearer to me as I write more F#. I guess you have to get your head into the right mode in order to see how this stuff works.
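One caveat worth noting: Seq.find throws an exception when no element matches. If a miss is a real possibility, Seq.tryFind returns an option instead, and Seq.pick folds the test and the projection into a single step. A quick sketch, reusing the same thelist data (redeclared here so the snippet stands alone):

```fsharp
let thelist = [|("joe", 1); ("jim", 3); ("scott", 6); ("kenn", 29)|]

// Seq.tryFind returns an option instead of throwing when nothing matches.
let maybeScott =
    thelist
    |> Seq.tryFind (fun (name, _) -> name = "scott")
    |> Option.map snd          // pull the number out if we found a match

// Seq.pick combines the test and the projection in one step.
let scottsNumber =
    thelist |> Seq.pick (fun (name, number) ->
        if name = "scott" then Some number else None)

printfn "%A %d" maybeScott scottsNumber
```

With tryFind, a missing key yields None rather than an exception, which is usually what you want for lookups over data you don’t control.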


Something Interesting in F#

I’m busy learning how to use SimpleDB. Since I can do the experimentation in F#, I am. Right now, I’m using the .NET SimpleDB client that Amazon has here. I had written the following code, which I only want to run when I discover that my SimpleDB domain does NOT exist:

    let createDomain =
        let createParam = new Model.CreateDomainRequest()
        createParam.DomainName <- domainName
        let response = SimpleDBClient.CreateDomain(createParam)
        ()

A downside to this code is that it gets evaluated EVERY time, whether the domain already exists or not. Creating a domain, even an existing one, takes several seconds from sending the request to receiving the response. Ideally, creation would only run as needed. What I really want is function-like behavior for createDomain. That is, the actual work shouldn’t happen until I need it to. Then I remembered that functions are first-class citizens in F#. I transformed my code to this:

    let createDomain =
        (fun () ->
            let createParam = new Model.CreateDomainRequest()
            createParam.DomainName <- domainName
            let response = SimpleDBClient.CreateDomain(createParam)
            ())

Now, when I want to create a domain, I write createDomain() to get the code to execute. Without that, the call to SimpleDB doesn’t happen.
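For what it’s worth, F# also has a built-in construct for exactly this pattern: lazy. Unlike the fun () -> version, a lazy value caches its result, so the body runs at most once no matter how many times you ask for it. A small self-contained sketch (the counter is only there to demonstrate the caching):

```fsharp
let mutable callCount = 0

// lazy defers the work AND caches the result: the body executes at most
// once, on the first Force(); later calls reuse the cached value.
let expensiveSetup =
    lazy (
        callCount <- callCount + 1
        42)

let first = expensiveSetup.Force()
let second = expensiveSetup.Force()
printfn "%d %d %d" first second callCount
```

For the createDomain case above, that caching would mean the slow SimpleDB round trip happens at most once per run instead of once per call.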

And yes, I’m aware that many folks who are using F# know this stuff cold. I’m blogging these nuggets now so that I can find them later. That, and it lets all the other newbs not feel so bad about missing the simple stuff;)


Amazon SimpleDB Domain Names

I was busy working on the Amazon Web Services version of my Photo Sharing application (the App Engine version appeared in February 2008). I was able to create a domain with the name friseton.com. I then wrote a simple query against this domain:

select * from friseton.com where email='dude@example.com'

SimpleDB responded and told me that the select query had an invalid format. On a wild guess, I thought it might be the dot (.) in friseton.com. Sure enough, it was. I deleted the friseton.com domain, added friseton_com, and the query started working.

select * from friseton_com where email='dude@example.com'

I don’t know if there is any syntax that allows the dot in the domain name, but I found this workaround, it works, and I’m moving on with the demo. I’ll learn the ins and outs as I go.


Amazon Announces Reserved Instances For EC2

Amazon just announced a reserved instance plan for EC2. This new option significantly reduces costs for Linux/UNIX users. The option is not available for Windows users at this time. The way it appears to work is this:

  • The user buys a block of hours that is good for a 1 or 3 year term.
  • The user consumes hours and has this usage debited from the prepaid block of time.
  • If the user fails to consume the hours within the 1 or 3 year term, the money is gone.

The 1 year blocks represent 10,833 hours. The 3 year blocks represent 16,667 hours. For comparison, a week has 168 hours and a year has 8,760 or 8,784 hours (leap year dependent). Here’s a cost comparison of the offerings:

 

                     1 yr Block of hours   Pay as you go   3 yr Block of hours   Pay as you go
Standard/Small       $325.00               $1,083.33       $500.00               $1,666.67
Standard/Medium      $1,300.00             $4,333.33       $2,000.00             $13,333.34
Standard/Large       $2,600.00             $8,666.66       $4,000.00             $13,333.34
High CPU/Medium      $650.00               $3,250.00       $1,000.00             $5,000.00
High CPU/Large       $2,600.00             $13,000.00      $4,000.00             $20,000.00

 

A couple of things jump out at me. First, I wonder if anyone would ever use the Standard/Large instance when the High CPU/Large is available for the same price. The only use case I see is someone who has a Standard/Large humming along perfectly and doesn’t want to incur the cost of testing the switch. It also seems like it will be cost effective to run a server all year round at these new prices, depending of course on your storage needs.

With these reserved instances, there are a number of restrictions. Keep in mind that Amazon is able to reduce the prices because you are agreeing to lock yourself into a usage mode. In order for you to win, they need to win too! What are the restrictions? From the FAQ, I see these:

  1. You can purchase 1 to 20 instances through the website. Instances 21+ require special permission (though it doesn’t look onerous).
  2. Reserved instances live in one and only one availability zone. You are locked into that zone for the contract duration. This shouldn’t be a huge restriction since, even when you purchase ahead of time, you come out ahead after consuming 20% of your hours in a high CPU contract and after 30% of your hours in the standard plans.
  3. Once you purchase an instance type, you can’t convert to a different instance type later. You stick with what you purchased. Again, the risk is low: if you made a mistake, you can throw away 69% of your hours and still come out ahead of the pay as you go route.
  4. As mentioned above, if you don’t use it, you lose it. Unconsumed hours disappear after the 1 or 3 year term.
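For anyone who wants to check those break-even figures, the arithmetic is just the block price divided by the pay as you go price for the same hours. A quick sketch using the numbers from the table above:

```fsharp
// Break-even point: the fraction of the block's hours you must consume
// before the prepaid block beats pay as you go pricing.
let breakEven blockPrice payAsYouGoPrice : float =
    blockPrice / payAsYouGoPrice

// $325.00 / $1,083.33 ≈ 0.30, $650.00 / $3,250.00 = 0.20
printfn "Standard/Small, 1 yr: %.0f%% of hours" (100.0 * breakEven 325.00 1083.33)
printfn "High CPU/Medium, 1 yr: %.0f%% of hours" (100.0 * breakEven 650.00 3250.00)
```

That matches the 30% standard and 20% high CPU break-even points mentioned in item 2.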

This looks like a good deal. I can’t wait until the comparable Windows plan is rolled out!


Flying Above the Clouds

I originally presented this talk to the Azure Cloud Computing User Group on February 25, 2009. Thanks to Bryce Calhoun for inviting me to present! The original meeting announcement had this summary:

Scott Seely, Architect at MySpace, will kick off the meeting with a 20-30 minute overview of the top three cloud computing offerings available today: Google App Engine, Amazon EC2, and Azure Services. His discussion will be primarily focused on a compare/contrast of the functionality and features inherent to each platform.

Enjoy!

Video: Flying Above the Clouds.wmv
