Recommended Listening
Posted by Scott Seely in Uncategorized on July 14, 2010
As developers, we need our knowledge of our platform to have depth as well as breadth. Typically, we gain depth through projects we take on. For example, we might learn MVC by converting a WebForms application to ASP.NET MVC 2. Acquisition of depth is almost passive: once we commit to the big project, we get the depth simply by finishing the job. To acquire breadth, we do other things. Any developer with significant breadth does most of the following: reads blogs, reads magazines, reads books, attends conferences, and listens to podcasts.
In the past few months, I found a hidden gem; I started listening to the Developer Smackdown. I’ll admit, I started listening because my area got a new Developer Evangelist, Clark Sell, and I just wanted to learn more about him. I’ve gone from stalking to the point where I look forward to the semi-weekly podcasts. Over the course of an hour, Clark and his co-host, Mark Nichols, introduce you to the things that their guests are passionate about. The guests are almost all from the Midwest region, so they are people that I have either met or can meet fairly easily. Clark and Mark are TFS Rangers, so ALM discussions are never far away on any topic. For example, they recently had Brian Hogan from Eau Claire on to talk about HTML5 and related topics (listen here). I increased my breadth fairly quickly by learning about HTML5 and CSS3. I was also able to add a trick to my web development toolbox: develop on Firefox first, then check for compatibility on IE, since in practice it is easier to alter a page for IE afterward. He also gave some more great advice: don’t worry about pixel-perfect pages on IE, Firefox, Safari, etc., because the only people that compare pixel positions are web designers and developers. Everyone else uses only one browser. Just worry that things look right.
I’ve also picked up tricks from Travis Feirtag, Steven Murawski, and others. As an added bonus, Mark and Clark take this show seriously, which means they’ve recently bought a ton of professional gear to make the show sound better.
What hidden gem podcasts do you know of in your markets?
Azure ServiceBus now supports Silverlight/Flash!
Posted by Scott Seely in Uncategorized on July 2, 2010
The Azure ServiceBus Access Control Service was updated on July 1, 2010 with a very nice surprise: it has policy files in place for Silverlight and Flash! A week ago, I had written code that redirected authentication requests so that my Silverlight code could authenticate against the ACS. Today, I saw a post that the service had been updated, so I went ahead and tried it out:
Silverlight: https://[your service].accesscontrol.windows.net/clientaccesspolicy.xml
Flash: https://[your service].accesscontrol.windows.net/crossdomain.xml
Note that both of these are policies that allow ANYONE to send messages to the ACS.
crossdomain.xml has this in the body:
<cross-domain-policy>
  <allow-access-from domain="*" secure="true" />
  <allow-access-from domain="*" secure="false" />
  <allow-http-request-headers-from domain="*" headers="*" secure="true" />
  <allow-http-request-headers-from domain="*" headers="*" secure="false" />
</cross-domain-policy>
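If you want to confirm the files are being served for your namespace, a quick console check works. This is just a minimal sketch; the "yourservice" namespace below is a placeholder for your own:

using System;
using System.Net;

class PolicyCheck
{
    static void Main()
    {
        // "yourservice" is a placeholder; substitute your ACS service namespace.
        var url = "https://yourservice.accesscontrol.windows.net/crossdomain.xml";
        using (var client = new WebClient())
        {
            // Prints the policy XML if the file is in place; throws a
            // WebException (404) if it is not.
            Console.WriteLine(client.DownloadString(url));
        }
    }
}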
I’d like to see options via the SDK or the admin UI that allow me to turn on global access or per-website access. This is something I would change rarely, so the minimum bar for me would be something on the Azure website that allows me to manage the crossdomain.xml contents. I’ve already posted something to the forums and will be interested to see what happens next.
I'm a Microsoft Regional Director!
Posted by Scott Seely in Uncategorized on June 29, 2010

If you are LinkedIn with me, you’ll see that I listed a new position: Microsoft Regional Director – Chicago. This is NOT a paid position with Microsoft. I noticed that other RDs I have as connections do list the RD role with Microsoft as their current employer. LinkedIn doesn’t have a mechanism to show that I’m connected to Microsoft as an ‘unpaid opinionated person.’
I’m pretty excited about this opportunity. In 1995 (the year I graduated college), I looked out at the technology landscape and noticed that software developers could do well if they picked a technology stack and learned it as deeply as possible. At that time, it looked like I should either learn Sun Unix, some embedded toolkit, or Microsoft Windows. Microsoft had more desktops and I wasn’t an electrical engineer, so that’s the direction I picked. Since then, I’ve written books on the technology, lots of articles, and even did a six-year stint at Microsoft. I’m happy I chose this path. I also want to see the Microsoft community continue to thrive and do well. A large, happy, well-informed Microsoft community benefits everyone in it. Thanks to my being a really busy person in the Chicago area, the local Microsoft office nominated me for the Regional Director position and I accepted.
This means I’ve taken on more unpaid work, which is kind of par for the course for me. If you are in the Chicago/Milwaukee/Madison area and want a Regional Director type person to come in and talk to your company, run/architect/develop a project, or just speak at a user group, send me a message: scott.seely@friseton.com.
The first in my PSOD series is out
Posted by Scott Seely in Uncategorized on June 27, 2010
I’ve spent the last few weeks working on some recordings for Pluralsight on Demand. I’ve been working on a course, .NET Distributed Systems Architecture. It was published this Friday. Please give it a listen and tell me what you think!
Azure Storage is a RESTful service
Posted by Scott Seely in Uncategorized on June 24, 2010
Today I had to build a demo for Azure and I noticed that I was following a tired old path where one demonstrates Azure storage services (Table/Queue/Blob) via a hosted application. My demo has two key points:
1. Look, there’s a picture that I uploaded!
2. Look, these processes can send messages via the queue!
The kicker was going to be that the messages are exchanged over the Internet, not the demo environment. I wanted a visible resource, an image of penguins, to be visible via a public URL. That was going to be the cool part. Then I thought, “Well, the Azure SDK is just a bunch of libraries. The libraries should work fine in a console application, right?” Right!
And guess what: the demo actually works pretty slick because I can demo the storage service in isolation. I don’t need to demo it with a deployed application. That helps me out, and gives me some ideas on how I can use Azure differently.
I started out with a little utility class that reads a connection string from a config file and returns a ready-to-use CloudStorageAccount instance.
public static class Utility
{
    private static CloudStorageAccount _account;

    public static CloudStorageAccount StorageAccount
    {
        get
        {
            // Lazily parse the connection string from the config file on first use.
            if (_account == null)
            {
                _account = CloudStorageAccount.Parse(
                    ConfigurationManager.AppSettings["storageAccount"]);
            }
            return _account;
        }
    }
}
The config is just the following (the line breaks inside the ‘value’ value are only for readability):
<appSettings>
  <add key="storageAccount"
       value="DefaultEndpointsProtocol=https;
              AccountName=[your account name];
              AccountKey=[your account key]" />
</appSettings>
My scenario is this: I have a directory with images. I want to force those images to be uploaded to Azure blob storage. This needs to happen from the local machine. I was really surprised at how easy this is to do. The Microsoft.WindowsAzure.StorageClient assembly has all the code you need to make this work. To upload the images and make them visible to the public, I just wrote the following:
static void Main()
{
    var client = Utility.StorageAccount.CreateCloudBlobClient();
    var dirInfo = new DirectoryInfo(Environment.CurrentDirectory);
    var cloudContainer = new CloudBlobContainer("friseton", client);

    // Blob-level public access: anyone with the URL can read a blob,
    // but cannot enumerate the container.
    var permissions = new BlobContainerPermissions
    {
        PublicAccess = BlobContainerPublicAccessType.Blob
    };
    cloudContainer.CreateIfNotExist();
    cloudContainer.SetPermissions(permissions);

    foreach (var fileInfo in dirInfo.EnumerateFiles("*.jpg"))
    {
        var blobRef = cloudContainer.GetBlobReference(fileInfo.Name);
        blobRef.DeleteIfExists(); // replace any stale copy
        blobRef.UploadFile(fileInfo.FullName);
    }
}
In this way, you can use Azure blob storage the same way you use Amazon’s S3. The queue can be used like the Simple Queue Service, and the table service can be accessed like SimpleDB.
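The table side works from a console application too. Here is a rough sketch, reusing the same Utility class; the PersonEntity type and the "people" table name are invented for illustration, and the calls are from the StorageClient 1.x data services model:

// Requires references to Microsoft.WindowsAzure.StorageClient
// and System.Data.Services.Client.
using System;
using Microsoft.WindowsAzure.StorageClient;

// A minimal entity: TableServiceEntity supplies PartitionKey, RowKey, Timestamp.
public class PersonEntity : TableServiceEntity
{
    public PersonEntity() { } // parameterless ctor needed for deserialization
    public PersonEntity(string partitionKey, string rowKey)
        : base(partitionKey, rowKey) { }
    public string FirstName { get; set; }
}

class TableDemo
{
    static void Main()
    {
        var tableClient = Utility.StorageAccount.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("people");

        // TableServiceContext is an ADO.NET Data Services context
        // pointed at table storage.
        var context = tableClient.GetDataServiceContext();
        context.AddObject("people",
            new PersonEntity("names", "1") { FirstName = "Scott" });
        context.SaveChanges();
        Console.WriteLine("Inserted one row into 'people'.");
    }
}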
Yes, I understand that this has been possible for Azure all along. It finally clicked in my noggin that developing for Azure really can mean just picking and choosing the parts you want to use. You don’t need to go all in to use the service. Instead, just pick the parts that make sense and build apps!
For those that are curious, here is the queue demo too. It uses an object, Name, to send messages. This could be ANY object; I just picked something simple for demo purposes.
[DataContract]
public class Name
{
    [DataMember]
    public string FirstName { get; set; }

    [DataMember]
    public string LastName { get; set; }
}
It is sent to the queue with this code:
static void Main()
{
    var client = Utility.StorageAccount.CreateCloudQueueClient();
    var cloudQueue = new CloudQueue(
        Utility.StorageAccount.QueueEndpoint + "friseton",
        client.Credentials);
    cloudQueue.CreateIfNotExist();
    var name = new Name { FirstName = "Scott", LastName = "Seely" };

    // Serialize the object with the WCF binary XML writer.
    var stream = new MemoryStream();
    var writer = XmlDictionaryWriter.CreateBinaryWriter(stream);
    var ser = new DataContractSerializer(typeof(Name));
    ser.WriteObject(writer, name);
    writer.Flush();

    // Copy out only the bytes actually written; GetBuffer() returns the
    // whole underlying buffer, which may be larger than the content.
    var buffer = new byte[stream.Length];
    Array.Copy(stream.GetBuffer(), buffer, stream.Length);
    var message = new CloudQueueMessage(buffer);
    cloudQueue.AddMessage(message, TimeSpan.FromHours(1));
}
And read from the queue thusly:
static void Main()
{
    var client = Utility.StorageAccount.CreateCloudQueueClient();
    var cloudQueue = new CloudQueue(
        Utility.StorageAccount.QueueEndpoint + "friseton",
        client.Credentials);
    cloudQueue.CreateIfNotExist();
    var ser = new DataContractSerializer(typeof(Name));

    // Poll for up to two minutes, checking once a second.
    var timeToStop = DateTime.Now + TimeSpan.FromMinutes(2);
    while (DateTime.Now < timeToStop)
    {
        if (cloudQueue.RetrieveApproximateMessageCount() > 0)
        {
            var message = cloudQueue.GetMessage();
            var buffer = message.AsBytes;
            var reader = XmlDictionaryReader.CreateBinaryReader(
                buffer, 0, buffer.Length, XmlDictionaryReaderQuotas.Max);
            var name = ser.ReadObject(reader) as Name;
            if (name != null)
            {
                Console.WriteLine("{0} {1} {2}",
                    name.FirstName, name.LastName, message.InsertionTime);
            }
            cloudQueue.DeleteMessage(message);
        }
        Thread.Sleep(TimeSpan.FromSeconds(1));
    }
}
And yes, the queue code also works from your local machine. The example above does require you to have the Azure SDK installed.
Speaking at Chippewa Valley .NET Users' Group Tomorrow
Posted by Scott Seely in Uncategorized on June 23, 2010
I’ll be doing a beginner’s talk on WCF tomorrow night at the Chippewa Valley .NET Users’ Group. Details are here: http://cvnug.wi-ineta.org/DesktopDefault.aspx?tabid=73. I’m looking forward to meeting some new people!
Converting a number to an arbitrary Radix
Posted by Scott Seely in Uncategorized on June 21, 2010
One of the great things about integers and longs is that they make convenient identifiers. However, when a human should be able to type those identifiers in, they leave a lot to be desired. An integer in the billions requires a user to enter 10 digits correctly. That’s hard to read, hard to keep your place in, etc. There is a solution to this issue: represent the data using a radix other than 10. For English speakers, a radix of 36 is easily readable and maximizes density while allowing for case insensitivity (no one wants to remember whether they should type z or Z!).
Consider this: the value of int.MaxValue written out in base 10 is:
2147483647
In base 36, it is:
zik0zj
6 characters instead of 10. Nice!
To do this, I wrote a simple function that converts a long (64 bits!) to any radix between 2 and 36. This is a basic first or second semester CS problem, I know. Still, this code is handy to have when you need it for converting numbers into something a person can type in:
static string ConvertToString(long value, int toBase)
{
    if (toBase < 2 || toBase > 36)
    {
        throw new ArgumentOutOfRangeException("toBase",
            "Must be in the range of [2..36]");
    }

    // Build the digit alphabet: 0-9 followed by a-z.
    var values = new List<char>();
    for (var val = '0'; val <= '9'; ++val)
    {
        values.Add(val);
    }
    for (var val = 'a'; val <= 'z'; ++val)
    {
        values.Add(val);
    }

    var builder = new StringBuilder();
    bool isNegative = false;
    if (value < 0)
    {
        // Note: this negation overflows for long.MinValue, which has
        // no positive counterpart.
        value = -value;
        isNegative = true;
    }

    // Emit digits least-significant first, inserting each at the front.
    do
    {
        long index = value % toBase;
        builder.Insert(0, values[(int)index]);
        value = value / toBase;
    } while (value != 0);

    if (isNegative)
    {
        builder.Insert(0, '-');
    }
    return builder.ToString();
}
And, to go the other way:
static long ConvertToLong(string input, int fromBase)
{
    if (fromBase < 2 || fromBase > 36)
    {
        throw new ArgumentOutOfRangeException("fromBase",
            "Must be in the range of [2..36]");
    }
    if (string.IsNullOrEmpty(input)) return 0;
    input = input.Trim();

    // Build the digit alphabet: 0-9 followed by a-z.
    var values = new List<char>();
    for (var val = '0'; val <= '9'; ++val)
    {
        values.Add(val);
    }
    for (var val = 'a'; val <= 'z'; ++val)
    {
        values.Add(val);
    }

    bool isNegative = false;
    int startIndex = 0;
    if (input[0] == '-')
    {
        isNegative = true;
        ++startIndex;
    }

    long retval = 0;
    for (int index = startIndex; index < input.Length; ++index)
    {
        retval *= fromBase;
        bool found = false;
        for (int number = 0; number < fromBase; ++number)
        {
            if (input[index] == values[number])
            {
                retval += number;
                found = true;
                break;
            }
        }
        if (!found)
        {
            // A character outside the digit alphabet for this radix is an error.
            throw new ArgumentException(
                "Invalid digit for the given radix: " + input[index], "input");
        }
    }

    if (isNegative)
    {
        retval = -retval;
    }
    return retval;
}
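A quick sanity check of the pair, using the int.MaxValue example from above:

static void Main()
{
    // Round-trip int.MaxValue through base 36 and back.
    var encoded = ConvertToString(int.MaxValue, 36);
    Console.WriteLine(encoded);                                    // zik0zj
    Console.WriteLine(ConvertToLong(encoded, 36));                 // 2147483647
    Console.WriteLine(ConvertToLong(encoded, 36) == int.MaxValue); // True
}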
Reading a WebResponse into a byte[]
Posted by Scott Seely in Uncategorized on June 15, 2010
This question came up on Twitter. I’m posting the solution here for posterity. How do you read a non-seekable Stream into a byte[]? Specifically, the response stream of an HttpWebResponse? Like this:
class Program
{
    static void Main(string[] args)
    {
        var request = WebRequest.Create("http://www.scottseely.com/blog.aspx");
        var response = request.GetResponse() as HttpWebResponse;
        var stream = response.GetResponseStream();

        // Size the buffer from the Content-Length header, then keep reading
        // until the whole body has arrived; Read may return fewer bytes
        // than requested.
        var buffer = new byte[int.Parse(response.Headers["Content-Length"])];
        var totalBytesRead = 0;
        while (totalBytesRead < buffer.Length)
        {
            var bytesRead = stream.Read(buffer, totalBytesRead,
                buffer.Length - totalBytesRead);
            if (bytesRead == 0) break; // connection closed early
            totalBytesRead += bytesRead;
        }
        Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, totalBytesRead));
    }
}
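One caveat: the code above trusts the Content-Length header, which a server can omit (with chunked transfer encoding, for instance). A more defensive sketch reads into a MemoryStream until the stream reports end of data:

using System.IO;

static class StreamUtil
{
    // Reads any stream to the end without relying on Content-Length.
    public static byte[] ReadFully(Stream stream)
    {
        var buffer = new byte[8192];
        using (var memory = new MemoryStream())
        {
            int bytesRead;
            // Read returns 0 only at end-of-stream.
            while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                memory.Write(buffer, 0, bytesRead);
            }
            return memory.ToArray();
        }
    }
}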
XmlDictionary and Binary Serialization
Posted by Scott Seely in Uncategorized on June 13, 2010
One of the interesting things that came out of WCF is the improvement in Infoset serialization. In particular, WCF introduced a format for binary serialization which reduces the space needed to represent objects. One of the keys to saving space is the notion of an XmlDictionary. The WCF serialization folks asked these questions:
How much could we reduce the size of a message if we allowed the parties communicating to exchange metadata about the messages?
What if we could reduce the size of messages by exchanging aliases for the XML Infoset node names?
The result of this ‘what if’ experiment is the XmlDictionary and XmlBinaryWriterSession. The mechanism is astonishingly simple. Assume that both ends have a mechanism for exchanging information about what to call the two parts of a QName: the namespace and the name of the node. Instead of sending namespace:element qualified items, send aliases. This works well in WCF messaging and happens whenever you send messages over the binary serializer. You can also use this in your own code that uses a binary serializer. The only requirement is that the serializer and deserializer have to agree on the makeup of the XmlDictionary. Let’s start by looking at some code that does plain old binary serialization.
We start with an object:
[DataContract(Namespace = "http://www.friseton.com/Name/2010/06")]
public class Person
{
    [DataMember]
    public string FirstName { get; set; }

    [DataMember]
    public string LastName { get; set; }

    [DataMember]
    public DateTime Birthday { get; set; }
}
I then have a ‘driver’ program:
static void Main(string[] args)
{
    var person = new Person
    {
        FirstName = "Scott",
        LastName = "Seely",
        Birthday = new DateTime(1900, 4, 5)
    };
    var serializer = new DataContractSerializer(typeof(Person));
    Console.WriteLine("Serialize Binary: {0} bytes",
        SerializeBinary(person, serializer).Length);
    Console.WriteLine("Serialize Binary with Dictionary: {0} bytes",
        SerializeBinaryWithDictionary(person, serializer).Length);
}
The application emits the size of the streams when each object is written out. The first, SerializeBinary, does not use a dictionary. As a result, it won’t have access to the aliases and must instead write out the full object.
private static Stream SerializeBinary(Person person,
    DataContractSerializer serializer)
{
    var stream = new MemoryStream();
    var writer = XmlDictionaryWriter.CreateBinaryWriter(stream);
    serializer.WriteObject(writer, person);
    writer.Flush();
    return stream;
}
In this case, we get a stream which contains 146 bytes. That’s pretty poor considering that we are interested in 10 characters (28 bytes: each string has a 4-byte length and then 2 bytes per character) and a simple DateTime representation (4 bytes). Can we make this smaller? How close can we get to 32 bytes? The answer: really close!
The version of SerializeBinaryWithDictionary that I wrote is verbose: it contains a number of lines that show what is going on internally. Your own code may be as long, but would include those lines only as debug output. Please note that you need to include a reference to the XMLSchema-instance namespace in your dictionary so that both the reader and writer agree on the value of this attribute.
private static Stream SerializeBinaryWithDictionary(Person person,
    DataContractSerializer serializer)
{
    var stream = new MemoryStream();
    var dictionary = new XmlDictionary();
    var session = new XmlBinaryWriterSession();
    var key = 0;
    session.TryAdd(dictionary.Add("FirstName"), out key);
    Console.WriteLine("Added FirstName with key: {0}", key);
    session.TryAdd(dictionary.Add("LastName"), out key);
    Console.WriteLine("Added LastName with key: {0}", key);
    session.TryAdd(dictionary.Add("Birthday"), out key);
    Console.WriteLine("Added Birthday with key: {0}", key);
    session.TryAdd(dictionary.Add("Person"), out key);
    Console.WriteLine("Added Person with key: {0}", key);
    session.TryAdd(dictionary.Add("http://www.friseton.com/Name/2010/06"), out key);
    Console.WriteLine("Added xmlns with key: {0}", key);
    session.TryAdd(dictionary.Add("http://www.w3.org/2001/XMLSchema-instance"), out key);
    Console.WriteLine("Added xmlns for xsi with key: {0}", key);
    var writer = XmlDictionaryWriter.CreateBinaryWriter(stream, dictionary, session);
    serializer.WriteObject(writer, person);
    writer.Flush();
    return stream;
}
The size difference is striking: we shave off 108 bytes by using the dictionary. We are getting close to the same size as the memory footprint of the object data! The cool bit: you can use this in your own code. The dictionary needs to be shared between the reader and writer sessions (there is a corresponding XmlBinaryReaderSession which can be populated from the same common dictionary for the deserialization side). For posterity, the output of the program is:
Serialize Binary: 146 bytes
Added FirstName with key: 0
Added LastName with key: 1
Added Birthday with key: 2
Added Person with key: 3
Added xmlns with key: 4
Added xmlns for xsi with key: 5
Serialize Binary with Dictionary: 38 bytes
A slightly different version that shows both reading and writing with a shared understanding of what the dictionary looks like follows:
private static Stream SerializeBinaryWithDictionary(Person person,
    DataContractSerializer serializer)
{
    var strings = new List<XmlDictionaryString>();
    var stream = new MemoryStream();
    var dictionary = new XmlDictionary();
    var session = new XmlBinaryWriterSession();
    var rdr = new XmlBinaryReaderSession();
    var key = 0;
    strings.Add(dictionary.Add("FirstName"));
    strings.Add(dictionary.Add("LastName"));
    strings.Add(dictionary.Add("Birthday"));
    strings.Add(dictionary.Add("Person"));
    strings.Add(dictionary.Add("http://www.friseton.com/Name/2010/06"));
    strings.Add(dictionary.Add("http://www.w3.org/2001/XMLSchema-instance"));

    var writer = XmlDictionaryWriter.CreateBinaryWriter(stream, dictionary, session);

    // Register every dictionary string with both the writer and reader
    // sessions so each side maps the same key to the same string.
    foreach (var val in strings)
    {
        if (session.TryAdd(val, out key))
        {
            rdr.Add(key, val.Value);
        }
    }

    serializer.WriteObject(writer, person);
    writer.Flush();

    // Read the object back with the shared dictionary to prove the round trip.
    stream.Position = 0;
    var reader = XmlDictionaryReader.CreateBinaryReader(stream,
        dictionary, XmlDictionaryReaderQuotas.Max, rdr);
    var per = serializer.ReadObject(reader) as Person;
    Console.WriteLine("Round-tripped: {0} {1}", per.FirstName, per.LastName);
    return stream;
}
Looking at the above, we can also account for the extra 6 bytes beyond our 32-byte target: they are the dictionary references for the node names.
Speaking at Chicago Architects Group May 18
Posted by Scott Seely in Uncategorized on May 17, 2010
I’ll be speaking at the Chicago Architects Group on May 18 over at the ITA (next to Union Station in Chicago, at the corner of Adams and Wacker). My topic is Azure for Architects. In this talk, I go over how to look at and use Azure from a software architecture point of view. Unlike most Azure talks, this one has no code in it, just concepts. This isn’t the type of talk I normally give, but given the crowd, architecture and slides will work better than whiz-bang demos.
The slides are here if you want them. I tend to use slides as guideposts when I present. Please don’t look at these slides as notes. 80% of the presentation is in what I say, not in what you can read. I’ll try to record the presentation as well and will put up the recording if the quality is good enough. There are still some seats open. Register at http://chicagoarchitectsgroup.eventbrite.com.