Scott Seely


Homepage: https://scottseely.wordpress.com

Some Words About Hosting on EC2

I wish I could say that there is something amazing that I did differently from everyone else who has hosted on EC2, but that’s just not so. Once you get an EC2 instance running, the rest of the process is identical to connecting to a virtual machine anywhere. If you need to add Windows components, the directions in this post are dynamite. If you use the management console, the directions in the Amazon article are pretty easy to map to actions in the UI without much thought. So, you are deploying to a virtual machine that is easily duplicated across the Amazon EC2 infrastructure. I would say that the strongest reason to use EC2 is that you already know how to use it. A second reason to use EC2 over your own data center is this: if you are using S3 and SimpleDB, you don’t pay for data transfers within the Amazon data center (but you do pay for data entering and leaving the data center!).

I installed the F# SDK onto the VM, added my web app to the inetpub/wwwroot folder, and made the application into an application root in the IIS Manager. I’d go into details, but these things are all well understood by Web developers. (If you need an assist, e-mail me.)

If you are hosting files and other items on Amazon, it will make sense to use EC2 to host your applications. If you don’t want to go out and buy a bunch of servers, rack them, and find a lot of high quality bandwidth, EC2 makes sense. If you are hosting your applications on an individual server that you rent by the month, you definitely should switch to EC2 (it’s almost the same thing only much less expensive).

Overall, I wasn’t ‘wowed’ by EC2. It’s a VM running on a machine I rent, nothing more. Its strength is its simplicity.

All that said, it’s not my favorite environment for hosting applications on the web. EC2 doesn’t simplify my life: I still have to maintain patches on my own VMs. That’s not horrible, and it does give me more control over when updates happen, but it doesn’t simplify things. Perhaps I haven’t been burned often enough by other people’s patches?

Accessing S3 Through .NET

(You can review my description of S3 here.)

Because Amazon’s Simple Storage Service, S3, has been around for quite a while, the community has built a large number of libraries to access S3 programmatically. The C# space is pretty crowded as well: many folks wrote libraries that were good enough for their needs, then released those libraries to the world. The trick in picking an S3 library does not revolve around picking the best library. Instead, it involves finding one written by someone who had a similar set of needs. When looking for a library, think about how you want to add, update, and delete objects in S3. Write down the use cases. Then, download a set of libraries and keep messing with them until you find one that matches your needs. My planned usage involves:

  1. Create a bucket once, when the application first runs. (This step can be done as part of initial setup, then never done again.)
  2. All the objects for a given user have a prefix mapping to the user’s e-mail address.
  3. Given a user ID, I need to be able to query for that user’s complete set of images (and no one else’s!).
  4. Given a picture ID, I need to delete the image from S3.

Overall, pretty simplistic. However, I found that many libraries didn’t look at how to iterate over a given prefix. After kissing a number of frogs and getting nowhere, I found a library called LitS3. Given that I didn’t have any other complex requirements, like setting ACLs on objects, I was able to use this library and nothing else to insert, delete, and query objects in my S3 bucket. LitS3 provides a simple object, LitS3.S3Service, to manipulate S3. You initialize the S3Service instance with the access key and secret so that the instance can manipulate your buckets. My initialization code is pretty basic and config driven:

let s3 =
    let s3svc = new S3Service()
    s3svc.AccessKeyID <- ConfigurationManager.AppSettings.["AWSKey"]
    s3svc.SecretAccessKey <- ConfigurationManager.AppSettings.["AWSSecret"]
    s3svc

Let’s take a look at the code I wrote to satisfy my 4 objectives using this object.

Create a Bucket

The S3Service class has a method, CreateBucket(string bucketName), to create an S3 bucket. Instead of figuring out how to create the bucket via a properly formatted URL, I cheated and wrote this code that should only need to run once, just before running any other code:

if (createBucket) then
    s3.CreateBucket(bucketname)

After that, I set createBucket to false so that this line wouldn’t run any longer. On Amazon, you get charged by how much work their systems have to do. This incents you, the developer, to use as few extra resources as possible. You can execute the call to create a bucket as often as you like without any harm to your data. However, it is an expensive call, and calling it too often will cost you, literally. If I were creating the bucket more frequently, I’d probably store a flag in SimpleDB and have the application key off of that data instead, as in the sketch below.
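
As a minimal sketch of that idea: check SimpleDB for a marker item before creating the bucket. The "appState" item name and "bucketCreated" attribute are hypothetical, and I’m assuming the SimpleDB library’s PutAttributesRequest mirrors the shape of the DeleteAttributesRequest shown in the SimpleDB posts further down this page.

let ensureBucketExists() =
    PhotoWebInit.InitializeAWS
    let simpleDb = PhotoWebInit.SimpleDBClient
    // Has anyone recorded that the bucket exists?
    let selectStmt = new Model.SelectRequest()
    selectStmt.SelectExpression <-
        "select bucketCreated from " + PhotoWebInit.domainName +
        " where itemName() = 'appState'"
    let result = simpleDb.Select(selectStmt)
    if (result.SelectResult.Item.Count = 0) then
        // No marker yet: create the bucket, then write the marker.
        s3.CreateBucket(bucketname)
        let putAttr = new Model.PutAttributesRequest()
        putAttr.DomainName <- PhotoWebInit.domainName
        putAttr.ItemName <- "appState"
        let attr = new Model.ReplaceableAttribute()
        attr.Name <- "bucketCreated"
        attr.Value <- "true"
        putAttr.Attribute.Add(attr)
        simpleDb.PutAttributes(putAttr) |> ignore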

Adding an Object to S3 (based on User ID)

When a user uploads an image, they use the ASP.NET FileUpload control. The server-side code converts the image to a JPEG that fits in a 300×300 pixel square and stores that version in S3. To save an object to S3, you need to hand over an array of bytes, a bucket name, a key name, a MIME type, and an ACL for the item. In this implementation, all images will be saved as being publicly visible. The identity of a user is their e-mail address. S3 doesn’t like some of the characters in an e-mail address, so I replace the @ before using it as a key prefix:

member this.UserPrefix = this.User.Identity.Name.Replace("@", ".at.")

member this.SaveUploadedImage() =
    let usernamePath = this.UserPrefix + "/" + Guid.NewGuid().ToString("N") + ".jpg"
    let filebytes = this.uploadImage.FileBytes
    let ms = new MemoryStream(filebytes)
    let image = Image.FromStream(ms)
    let gfx = Graphics.FromImage(image)
    let size =
        let dHeight = Convert.ToDouble(image.Height)
        let dWidth = Convert.ToDouble(image.Width)
        if (image.Height > image.Width) then
            new Size(Convert.ToInt32(dWidth * (300.0 / dHeight)), 300)
        else
            new Size(300, Convert.ToInt32(dHeight * (300.0 / dWidth)))
    let resized = new Bitmap(image, size)
    let saveStream = new MemoryStream()
    resized.Save(saveStream, System.Drawing.Imaging.ImageFormat.Jpeg)
    saveStream.Flush()
    s3.AddObject(new MemoryStream(saveStream.GetBuffer()),
        bucketname, usernamePath, "image/jpeg", CannedAcl.PublicRead)
    ()

List Object Keys (Based on User ID)

Given a user ID, LitS3 makes it trivial to retrieve the names of all of that user’s objects. The code is a one-liner:

s3.ListObjects(bucketname, this.UserPrefix)

Each ID, coupled with the rule that all images live right under a prefix based on the user’s e-mail address, gives me an easy way to calculate the full path to each image. The images are returned using an ImageItem structure, which can retrieve its other fields (stored in SimpleDB). Since we covered SimpleDB earlier (overview, insert/update, query, and delete), I’ll skip that information for ImageItem. ImageItem has the following DataContract:

[<DataContract>]
type ImageItem() =
    [<DefaultValue>]
    [<DataMember>]
    val mutable ImageUrl: System.String

    [<DefaultValue>]
    [<DataMember>]
    val mutable ImageId: System.String

    [<DefaultValue>]
    [<DataMember>]
    val mutable Description: System.String

    [<DefaultValue>]
    [<DataMember>]
    val mutable Caption: System.String

    [<DefaultValue>]
    [<DataMember>]
    val mutable UserName: System.String

(plus some other functions to read and write to SimpleDB)

The application takes the result of ListObjects and creates a Sequence of ImageItems to return to the caller (using WCF + JSON).

[<WebInvoke(Method = "POST")>]
[<OperationContract>]
member this.GetImagesForUser() =
    s3.ListObjects(bucketname, this.UserPrefix)
    |> Seq.map
        (fun x ->
            let imageItem = new ImageItem()
            imageItem.ImageId <- x.Name
            imageItem.ImageUrl <- "http://s3.amazonaws.com/" +
                                  bucketname + "/" + this.UserPrefix + x.Name
            imageItem.FetchData
            imageItem)

Deleting an Object from S3

To remove an object from S3, you just need to have the key to the object and proper authorization. The owner of the bucket always has proper authorization to delete an object. When performing any cleanup, make sure to also remove any data stored elsewhere about the object.

[<WebInvoke(Method = "POST")>]
[<OperationContract>]
member this.DeleteImage(imageId: System.String) =
    let imageItem = new ImageItem()
    imageItem.ImageId <- imageId
    let result = imageItem.Delete
    if (result) then
        s3.DeleteObject(bucketname, this.UserPrefix + imageId)
    ()

With that last bit, we can manage objects in S3.

Ordering from the Dell Outlet

I ordered a laptop from the Dell Outlet a few days ago. The Dell Outlet has a well-known issue in their system: a package’s order status will correctly move to Shipped when the package is assigned a FedEx tracking number and moved to the loading dock, but the carrier and tracking information will forever say Data Temporarily Unavailable. I say FedEx because most of us using the outlet will also opt for the free 3-5 day shipping which, in 2009, is provided by FedEx Ground for United States shipments.

Knowing that they use FedEx, you can look up the information without calling Dell customer service. How do you do this? When you placed your order, you should have received an order number. If you don’t have the information handy, log in to the Dell site under the My Account link (usually at the top of the page on the Dell site). Then, look at Order Status and select your order. You should eventually see a page with this information on it:

Order Information
Full Name: SCOTT SEELY
Customer Number: 99999999
Dell Purchase ID: 200043XXXXXXX
Order Number: 729877XXX
Order Date: 4/29/2009
Order Status: Shipped

(yes, the Customer Number, Purchase ID, and Order Number have been edited.)

Copy the Order Number and head over to FedEx’s site to their Track by Reference page. You probably don’t have and definitely don’t need an Account Number, so skip the first field. Then, put the Order Number into the field labeled Enter reference. For the approximate ship date, enter the Order Date; accuracy isn’t important since the date is only used to scope the reference/Order number to a subset of all FedEx shipments. Set Destination Country to United States and put your zip code/postal code in the Destination postal code box. Lastly, click on Track and you should see the status of your Dell Outlet item.

Of course, you can also call up customer support at Dell, but do you really want to wait for an answer when, through this method, you can compulsively check to see if your new laptop|netbook|PC is at your door yet? I know what I did!

Amazon’s Simple Storage Service (S3)

This week, we close out our look at the Amazon Web Services implementation of the AppEngine Photo Application. S3 is perhaps one of the best known, most used services on AWS. Before we discuss using S3, I need to cover some basic terminology. The key words to know are bucket, object, and key. A bucket contains zero or more objects. An object is associated with one key.

Buckets have names and are associated with an AWS account. Upon creating a bucket, you also create an addressable resource on the Internet. For example, I can create a bucket named MyS3Bucket. Upon creating that resource, S3 enables a new URI at http://MyS3Bucket.s3.amazonaws.com and at http://s3.amazonaws.com/MyS3Bucket. The bucket can be public or private. A public bucket can be accessed by anyone whereas a private bucket requires a token to access any contents. The owner of the bucket as well as others the owner authorizes can access private contents.

A bucket has little use if it is empty. Buckets contain keyed collections of bytes called objects. Each object has a key, and the key is a string. For example, if the key is myfile.jpg, S3 makes the object accessible at

http://MyS3Bucket.s3.amazonaws.com/myfile.jpg

and at

http://s3.amazonaws.com/MyS3Bucket/myfile.jpg

From S3’s point of view, the key is just a string. That string may contain embedded forward slashes, /. This feature allows one to store objects in S3 using what appear to be paths. Let’s assume that we want to keep videos separated from photos. When adding a video object to S3, prepend the string video/ to the name of any videos and photo/ to the name of any photos. Doing this for the image above makes the URI look like this:

http://MyS3Bucket.s3.amazonaws.com/photo/myfile.jpg

and

http://s3.amazonaws.com/MyS3Bucket/photo/myfile.jpg

S3 allows you to query against keys in the buckets and find the common prefixes before a particular delimiter, such as the forward slash (/). Through this mechanism, you can effectively list the contents of a given prefix. Various tools utilize this functionality to allow one to browse S3 like any other directory-based file system.
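
As a minimal sketch of that query, here is one way to pull back the common prefixes with the raw REST API. This assumes a bucket whose listing is publicly readable; a private bucket needs a signed Authorization header, which I’m skipping here.

open System.Net
open System.Xml

let listCommonPrefixes (bucket : string) (prefix : string) =
    use client = new WebClient()
    // GET /?prefix=...&delimiter=/ returns one CommonPrefixes element
    // per "directory" that sits directly under the prefix.
    let url = "http://s3.amazonaws.com/" + bucket +
              "/?prefix=" + prefix + "&delimiter=/"
    let doc = new XmlDocument()
    doc.LoadXml(client.DownloadString(url))
    let nsMgr = new XmlNamespaceManager(doc.NameTable)
    nsMgr.AddNamespace("s3", "http://s3.amazonaws.com/doc/2006-03-01/")
    doc.SelectNodes("//s3:CommonPrefixes/s3:Prefix", nsMgr)
    |> Seq.cast<XmlNode>
    |> Seq.map (fun node -> node.InnerText)
    |> Seq.toList

Calling listCommonPrefixes "MyS3Bucket" "" would list the top-level prefixes (photo/, video/), while a prefix of "photo/" would list anything nested one level below that.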

When uploading an object to S3, you indicate the Content-Type of the object. S3 remembers this information so that it can set the HTTP Content-Type header when someone later requests the object from S3.

Up next, we’ll look at how to use S3 from .NET!

SimpleDB Delete

This post uses Amazon’s SimpleDB C# library. Deleting a record is really simple. Again, this is to delete an ASP.NET MembershipUser from SimpleDB. To do this, you simply pass the ItemName to a DeleteAttributesRequest(). Because that name is unique within the domain, the values will go away.

override this.DeleteUser(username, deleteAllRelatedData) =
    PhotoWebInit.InitializeAWS
    let simpleDb = PhotoWebInit.SimpleDBClient
    let delAttr = new Model.DeleteAttributesRequest()
    delAttr.DomainName <- PhotoWebInit.domainName
    delAttr.ItemName <- username
    let delResponse = simpleDb.DeleteAttributes(delAttr)
    true

Pretty easy, right?

Next week, we’ll finish this up with a post on Simple Storage Service and a zipped copy of the source code (in case you want to see everything in one place).

Silverlight/AppEngine InformIT Article in the Works

One of the most popular posts on my blog is Hosting Silverlight on Google App Engine from back in March. With my .NET REST book officially hitting shelves last week, InformIT contacted me to write an article (or more!) to help promote the book and me. I suggested expanding the information to show how a Silverlight application could use App Engine as a platform, and they said “sure”. Turnaround on the article will be tight: I owe it to them by May 8. I’ll announce when the article goes live. But for those of you waiting for the follow-up, it’s coming! I’m writing the Silverlight side in C# to make sure it’s accessible to everyone. The AppEngine part is in Python.

I appreciate all the visitors I’ve been getting and the positive feedback. You are awesome!

SimpleDB Retrieve

I recently talked about using SimpleDB to save or update a record. Today, we look at how to query against records in SimpleDB while using the Amazon SimpleDB C# Library. Each record in SimpleDB has an ItemName (unique, primary key) and a set of attributes (name-value pairs). Upon insert or update, all fields are indexed for easy querying. You can access the data using the Query API or Select API. Since I am already familiar with SQL, I picked the Select API as it closely resembles the standard SQL SELECT statement.

Recall that SimpleDB stores data in domains. Any query goes against the stored domain. For example, I know that the MembershipUsers in the domain for my sample ASP.NET MembershipProvider all have a unique email address in a field named email. I also know that the email only exists on one record collection, so I shouldn’t be getting back any other record types. The actual lookup code is pretty simple:

let NormalizeUsername(username:string) =
    username.Replace("'", "''")

let LookupUser username =
    PhotoWebInit.InitializeAWS
    let normalizedName = NormalizeUsername(username)
    let simpleDb = PhotoWebInit.SimpleDBClient
    let selectStmt = new Model.SelectRequest()
    selectStmt.SelectExpression <-
        "select * from " + PhotoWebInit.domainName +
        " where email='" + normalizedName + "'"
    let result = simpleDb.Select(selectStmt)
    let temp = new AwsMembershipUser()
    (temp.LoadFromSelect result)

NormalizeUsername takes care of embedded tick marks by doubling them, which is how the Select grammar escapes quotes (though the function may be open to other attacks). Recall that this is NOT SQL, so a DELETE, UPDATE, or DROP won’t do much of anything other than fail.
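
For example, a name with an embedded tick (the address is made up) comes back with the tick doubled:

// Yields "o''brien@example.com"; the tick can no longer terminate
// the string literal inside the Select expression.
let safeName = NormalizeUsername "o'brien@example.com"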

The values come back as attributes and get parsed with the following function (lines are numbered to work around line wrapping):

     1 member this.LoadFromSelect (data: Model.SelectResponse) =
     2   let hasSelectResult = data.SelectResult.Item.Count > 0
     3   let hasAttributes = hasSelectResult && data.SelectResult.Item.[0].Attribute.Count > 0
     4   if (hasAttributes) then
     5     let attributeCollection = data.SelectResult.Item.[0].Attribute
     6     let providerName = PhotoWebInit.DefaultMembershipProvider
     7     let name = (this.SelectAttribute attributeCollection "email")
     8     let providerUserKey = (this.SelectAttribute attributeCollection "email")
     9     let email = (this.SelectAttribute attributeCollection "email")
    10     let passwordQuestion = (this.SelectAttribute attributeCollection "passwordQuestion")
    11     let isApproved = (PhotoWebInit.ParseBool (this.SelectAttribute attributeCollection "isApproved") false)
    12     let isLockedOut = (PhotoWebInit.ParseBool (this.SelectAttribute attributeCollection "isLockedOut") true)
    13     let creationDate = (PhotoWebInit.ParseDateTime (this.SelectAttribute attributeCollection "creationDate") DateTime.MaxValue)
    14     let lastLoginDate = (PhotoWebInit.ParseDateTime (this.SelectAttribute attributeCollection "lastLoginDate") DateTime.MaxValue)
    15     let lastActivityDate = (PhotoWebInit.ParseDateTime (this.SelectAttribute attributeCollection "lastActivityDate") DateTime.MaxValue)
    16     let lastPasswordChangedDate = (PhotoWebInit.ParseDateTime (this.SelectAttribute attributeCollection "lastPasswordChangedDate") DateTime.MaxValue)
    17     let lastLockoutDate = (PhotoWebInit.ParseDateTime (this.SelectAttribute attributeCollection "lastLockoutDate") DateTime.MaxValue)
    18     let passwordAnswer = (this.SelectAttribute attributeCollection "passwordAnswer")
    19     let password = (this.SelectAttribute attributeCollection "password")
    20     (new AwsMembershipUser(providerName, name, providerUserKey, email,
    21        passwordQuestion, System.String.Empty, isApproved, isLockedOut, creationDate, lastLoginDate,
    22        lastActivityDate, lastPasswordChangedDate, lastLockoutDate, passwordAnswer, password))
    23   else
    24     (new AwsMembershipUser())

Finally, the helper functions that parse a date or boolean are:

let ParseBool value (defaultValue : bool) =
    let mutable retval = defaultValue
    // TryParse writes into retval through the byref; on failure,
    // fall back to the caller's default instead of TryParse's zero value.
    if bool.TryParse(value, &retval) then retval else defaultValue

let ParseDateTime value (defaultValue : DateTime) =
    let mutable retval = defaultValue
    if DateTime.TryParse(value, &retval) then retval else defaultValue

(Is it obvious yet that I’m still an F# neophyte? Yes, I’m now grabbing the old F# books and reading them so that I develop some sense of style because the above is suboptimal.)

The Select API supports the standard comparison operators:

  • >
  • <
  • <=
  • >=
  • =
  • !=

not makes an appearance to balance out like and is null (not like, is not null). You can also do range checking via the between operator, value checking against a set via in, and operations against multi-valued attributes using every(). A great set of examples is up on Amazon.
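
As a hedged illustration, here is how a few of those operators might look plugged into the same SelectRequest shown above. The attribute names come from this post; the date comparison assumes the dates were stored as sortable strings (e.g., ISO 8601), since SimpleDB compares everything lexicographically.

let selectStmt = new Model.SelectRequest()
selectStmt.SelectExpression <-
    "select * from " + PhotoWebInit.domainName +
    " where isApproved = 'true'" +
    " and lastLoginDate between '2009-01-01' and '2009-04-30'" +
    " and passwordQuestion is not null"
let recentApproved = simpleDb.Select(selectStmt)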

Thinking about Costs of Cloud Worker Models

This post is just a dump of a set of thoughts that have been running around in my brain.

All three major cloud platforms, Google App Engine, Amazon Web Services, and Microsoft Azure, offer a way to run worker tasks. Google recently introduced cron jobs. Amazon has Elastic Compute Cloud. Azure has the worker role. Each of these mechanisms works in a similar manner: a process looks somewhere for work to do (in the distributed database, a work queue, or elsewhere) and then performs the task. I don’t have any issues here; this all makes sense. You need a headless process to take input and produce output. This is a staple of most systems I have worked on. My issue is a different one: how much is this going to cost me? While many people are busy climbing the Gartner hype cycle and are close to the “Peak of Inflated Expectations,” I chose to enter at a personal “Trough of Disillusionment.” I believed the hype with SOAP, worked with the leaders on WS-* at Microsoft, and taught WCF for a while. In the process, I grew up. I’m working on living on the “Slope of Enlightenment” as I learn what the platforms do and do not do. (Hopefully, I’m not fooling myself!)

At this point, I’m investigating when it will make sense to process worker information locally vs. in the cloud. At the end of the day, it comes down to costs. Of the big three, only Microsoft has yet to announce pricing, and those numbers will come out by the end of the year (2009). So, what does it cost per hour?

Amazon: $0.03/hour, $21.59/month

AppEngine: $0.10/CPU hour.

Microsoft has not announced costs. However, I have been advised to monitor the metrics Microsoft collects, as those will likely be the things Microsoft uses to determine bills. Over the last 24 hours, I have had a WebRole and a WorkerRole running constantly. The metrics chart shows me consuming 2 virtual machine hours/hour. I’m fine with this so long as the baseline cost is competitive with web hosting. I’d probably spend as much as $20/month per VM in use for a given role to use this model. That’s the value to me of being able to hit web scale if and when my site gets to be popular. It’s hard to build in scalability, so I don’t want to face a rewrite/refactoring when moving from a web host environment to a cloud environment. It pays to start right.

If your current processing loads are at ~33% CPU usage, Amazon and Google cost about the same. However, if you have a new site where your processing usually finds 5 or 6 items waiting, handles those in a few seconds, and then waits 5 minutes, AppEngine might be a LOT cheaper. On a moderate-transaction web site, you may only do 10 minutes of processing per HOUR, bringing your cron cost down to half the cost of Amazon.
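
Here’s the arithmetic behind that claim as a quick sketch (the rates are the ones quoted above):

// An always-on EC2 instance bills for the full hour no matter what.
let ec2CostPerHour = 0.03
// AppEngine bills per CPU hour, so idle time is free.
let appEngineRatePerCpuHour = 0.10
let busyMinutesPerHour = 10.0
let appEngineCostPerHour = appEngineRatePerCpuHour * (busyMinutesPerHour / 60.0)
// appEngineCostPerHour is about $0.0167, roughly half of EC2's $0.03.
// Break-even is 0.03 / 0.10 * 60.0 = 18 busy minutes per hour (~30% CPU),
// matching the ~33% figure above.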

It looks like Microsoft will be following the Amazon payment model. They will need to have a way to bring costs in line with AppEngine. I would prefer to see a model that bills me by CPU/Processor hour instead of VM hour. A VM hour can have very little usage whereas high CPU hours can adversely impact other VMs running on the same machine. Ideally, a cloud box would balance out based on required CPU, not number of machines, so these metrics should be available to those who run Microsoft’s data centers.

Thanks to all of my readers for sending great questions in e-mail (scott@scottseely.com) or via comments on the blog.

IBM on EC2

Last night (4-23), I saw that Amazon is now offering IBM applications by the hour. I thought “Cool!” Then I took a look at the pricing for these things. This pricing doesn’t take effect if you already own IBM licenses for the products and just want to host on EC2; if you own licenses, IBM has a table up to show you how to convert from Processor Value Units (PVUs) to EC2. These prices are for preconfigured Amazon Machine Images (AMIs) with the IBM software ready to rock, and no extra salesmen need to get involved.

All that said, I have no idea how much a PVU costs for an application, but my guess is “a lot”. A project I was on in 2007 required an IBM C compiler to run on z/OS (it was needed to translate SQL statements into C programs that could run as stored procedures on DB2; how this even made sense any longer in 2007 is beyond me). IIRC, the cost there was over $18,000/year (note: this number is from memory and is likely low). I would guess that the services IBM is offering are more expensive. Looking at the charts and factoring out the cost of just renting an AMI by the hour, it appears that a PVU is worth about ~$0.004/hour for most products (discounting the base cost of an AMI and using the simple math of the High CPU Medium Instance that is rated at 100 PVUs). The hourly PVU cost is 5-15 times higher for Content Management Server ($0.021) and a WebSphere + Content Management Server combo (~$0.06/PVU).
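
The estimate reduces to a little formula; a sketch with placeholder rates (not real 2009 prices) looks like this:

// Subtract the plain AMI rental rate from the IBM-flavored AMI rate,
// then divide by the instance's PVU rating to get $/PVU/hour.
let pvuHourlyCost ibmAmiRate baseAmiRate pvus =
    (ibmAmiRate - baseAmiRate) / pvus

// With placeholder numbers: pvuHourlyCost 0.90 0.50 100.0 = 0.004,
// the ~$0.004/PVU/hour figure mentioned above.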

The numbers above are approximate and were done on a piece of paper so I could get a feel for costs. Before you make any decisions, make sure to do your homework. I am curious if the pricing differences seem about right for IBM products. I have nothing against their pricing model; IBM does great work for companies that consume software but where having the latest software and tools isn’t seen by management as a competitive advantage.

Finally, a note about Processor Value Units for those who have not worked with IBM packages in the past: a Processor Value Unit (PVU) is IBM’s way to work around per-CPU and per-user licensing. CPU manufacturers are busy adding cores and speeding up their chips. While this goes on, IBM looks at these new chips and states how much workload the CPU can handle in units called PVUs. When you buy a product such as DB2 and you need to allow 100 users access to the product, IBM can know how many PVUs you need for that many users. Its sales team then makes sure you have the right hardware for this new workload on top of your current workload, and sends you a bill. Because IBM’s sales model is high touch, the PVU is one tool among many that enables their sales people to make sure the hardware and software needs are correctly matched. (Feel free to correct me if I’m mistaken, but this is how things appear after reading the literature on IBM’s site.) I could not find a standard price for a PVU (but I attempted to derive one above). Again, because IBM is high touch, my guess is that the price of a PVU is negotiable depending on a number of factors including:

  • Size of account
  • If the account represents a conversion to IBM (competitive pricing)
  • Gut feel from the sales team

Understand that a high touch sales model allows both parties to come out ahead. This sales practice involves a lot of unpaid research and preparation by the sales team and support staff in an effort to match customer needs with what the sales organization can provide. This sales practice also minimizes the amount of money left “on the table” because the sales team gains a lot of inside knowledge about the client’s needs and wants.

NHaml “gotcha”: Remember to set Build Action for haml files to Content

I’m using NHaml because, frankly, I wanted to try something that gives me easier-to-write markup. NHaml seemed about perfect. I started my learning experience last night and thought “this is cool”. I returned to the project again today and forgot some simple basics. Because I’m also using Azure, I may be one of 5 people on the planet who have had this problem so far. When adding a file to an Azure project, you may want the file to be present in the deployment module. This means that after adding the .haml file, you need to set its Build Action property to Content. By default, the value will be None. If you forget to set the Build Action to Content, you will get the following error (in this case, for a page at /home/about):

The view ‘about’ or its master could not be found. The following locations were searched:
~/Views/home/about.aspx
~/Views/home/about.ascx
~/Views/Shared/about.aspx
~/Views/Shared/about.ascx
~/Views/home/about.haml
~/Views/Shared/about.haml

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.InvalidOperationException: The view ‘about’ or its master could not be found. The following locations were searched:
~/Views/home/about.aspx
~/Views/home/about.ascx
~/Views/Shared/about.aspx
~/Views/Shared/about.ascx
~/Views/home/about.haml
~/Views/Shared/about.haml
Source Error:

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

Stack Trace:

[InvalidOperationException: The view 'about' or its master could not be found. The following locations were searched:
~/Views/home/about.aspx
~/Views/home/about.ascx
~/Views/Shared/about.aspx
~/Views/Shared/about.ascx
~/Views/home/about.haml
~/Views/Shared/about.haml]
   System.Web.Mvc.ViewResult.FindView(ControllerContext context) +105521
   System.Web.Mvc.ViewResultBase.ExecuteResult(ControllerContext context) +139
   System.Web.Mvc.ControllerActionInvoker.InvokeActionResult(ControllerContext controllerContext, ActionResult actionResult) +10
   System.Web.Mvc.<>c__DisplayClass11.<InvokeActionResultWithFilters>b__e() +20
   System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilter(IResultFilter filter, ResultExecutingContext preContext, Func`1 continuation) +251
   System.Web.Mvc.<>c__DisplayClass13.<InvokeActionResultWithFilters>b__10() +19
   System.Web.Mvc.ControllerActionInvoker.InvokeActionResultWithFilters(ControllerContext controllerContext, IList`1 filters, ActionResult actionResult) +178
   System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +399
   System.Web.Mvc.Controller.ExecuteCore() +126
   System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +27
   System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +7
   System.Web.Mvc.MvcHandler.ProcessRequest(HttpContextBase httpContext) +151
   System.Web.Mvc.MvcHandler.ProcessRequest(HttpContext httpContext) +57
   System.Web.Mvc.MvcHandler.System.Web.IHttpHandler.ProcessRequest(HttpContext httpContext) +7
   System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +181
   System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +75
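
For reference, setting Build Action to Content just adds a Content item to the project file; if you’d rather hand-edit the project file, the entry looks roughly like this (the path matches the /home/about view from the error above):

<ItemGroup>
  <!-- Content items get copied into the deployment package -->
  <Content Include="Views\home\about.haml" />
</ItemGroup>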
