I usually write in English about technology and software – that's what fascinates me. The recent protests of the young, intelligent people in Bulgaria who are tired of the ruling oligarchic circles inspired me to write this post, in which I offer my view of some of our social and political problems and possible solutions to them.
These past few days I have been genuinely amazed by the young people of Bulgaria, who seem to be waking up from hibernation, or are simply fed up, or, as I feel myself, the cup has simply overflowed. What amazes me is that the goal of these people's protests is not to get their electricity bills paid or to oust this or that politician – their goal is to create a better, more stable society in which they and their children can live. Yesterday, while reading calls online to ignore the provocations, and appeals like the one to help the police officers by giving each of them a bottle of water in the terrible heat, for a moment I truly felt proud to be Bulgarian – proud of the young, intelligent, good people of Bulgaria. I'm glad you exist!
Almost everyone in the country knows what our problems are, but very few people offer solutions to them. That's why I really liked @peteriliev's idea of having people with ideas propose solutions to society's problems in the form of blog posts. In the following lines I'll offer my view of where the roots of our problems lie and how we might solve them.
- Lack of information, or low transparency – nobody in society really knows what happens in parliament: which laws are passed, how they are passed, what problem the current amendment to a law's text actually solves, and whether there is a better way to solve it. Beyond that, the public has no idea what contracts are signed, on what terms, who made the decision to sign them, and what the motivation behind that decision was compared to the other offers.
I personally was ASTONISHED when the caretaker minister Asen Vasilev announced that the state has 12,000 megawatts of capacity, of which 5,000 covers all of Bulgaria's consumption and exports. How could electricity not be expensive when we are paying for 7,000 idle megawatts?! And why on earth do we need the Belene nuclear power plant?!
- Low public participation in government – even if we know what is happening, there is nothing we can do about it. We have the power to choose who will govern us for the next 4 years, and that's it. Referendums in their current form are cumbersome, slow and expensive (a single referendum costs several million leva) and generally ineffective. And if someone irritates us badly enough, we take to the streets, bring him down and elect another "pumpkin" in his place.
To this I can add the dead souls on the voter rolls, voters with little education, and vote buying.
There are probably other problems, but in my view these are the roots of all the rest. In the following lines I'll give my solutions to these problems, as I see them. Maybe it's because I'm a programmer and generally a technology person, but I sincerely believe there are very few problems in the world that cannot be solved with software and/or innovative new technologies. That's why my solutions lean in that direction.
- Put all laws into a version control system, as they have done in Germany (here is the Bundestag's account). Admittedly, there it was done unofficially (I believe), while here it could be introduced as an official requirement. This would let anyone see not only the laws but also how they have changed over time. Parties planning changes to a law could enter those changes into the version control system before they are put to a vote in parliament. The changes could also come with arguments – for example, "how dropping the requirement for 10 years of service would help the head of DANS be more effective". Everything would be accessible and visible online, so interested people could review the changes, comment on them or propose a better alternative. If it became a requirement that every change to a law's text, and every parliamentary decision, be entered into the system at least one week before the vote, I think people would get a pretty good picture of what decisions are about to be made and why. And there would be no more excesses like decisions about national security being made in 15 minutes.
- The same could be done for NEK's contracts and all public procurement. The decisions made about them should be justified and accessible to everyone, along with the offers.
The goal is full transparency of the decisions, the contracts and the proposed laws, so that people know about them and take an active part in the process of making them.
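To make the version-control idea concrete, here is a toy sketch with git. Everything in it is hypothetical – the law text, the repository, the commit messages – a real system would of course live on an official, public server:

```shell
# Put a (fictional) law under version control and record a proposed amendment.
rm -rf /tmp/laws-demo && mkdir /tmp/laws-demo && cd /tmp/laws-demo
git init -q
git config user.email "parliament@example.bg"
git config user.name "Parliament"

echo "Art. 1. The head of DANS must have 10 years of service." > security-act.txt
git add security-act.txt
git commit -qm "Security Act: current text"

# A party proposes an amendment at least a week before the vote:
echo "Art. 1. The head of DANS must have 5 years of service." > security-act.txt
git commit -qam "Proposal: lower the service requirement to 5 years"

# Anyone can now see what changed, when and why:
git log --oneline -- security-act.txt
```

The log shows the full history of the text, and `git diff` between any two commits shows exactly which words a given amendment changes.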
- An electronic voting system and e-government. With an electronic voting system, referendums could become cheap, fast and generally effective. And through such referendums people – the voters – could participate in political life far more easily.
- A card, a password or, in general, the ability to vote electronically could be given to citizens upon presenting a diploma for primary (secondary) education and an ID card proving they are of age.
- The electronic system's source code should be open source, so that everyone can be sure the vote cannot be manipulated.
At its core, the idea behind my proposals is this:
Since we cannot trust the politicians to govern us, let's make it possible for us to participate in government as much as we can, by making informed and transparent decisions.
There's no need to go to extremes. Only the most important decisions could be put to a referendum – but referendums should become more frequent.
I'll also throw out a few questions worth thinking about:
1. Where can we gather, discuss and develop ideas like mine, those of other bloggers and, I hope, of other people – ideas that could benefit society as a whole?
2. Who could spread such "untraditional" ideas and bring them into parliament? The current political faces? Hardly…
I hope I've given some ideas, directions or food for thought to the people reading these lines. In my opinion there are ways and ideas to improve the situation. We just have to be active, to have an opinion and stand up for it, and not let ourselves be trampled on.
A while ago I wrote about how to import an existing SVN repository to GitHub. The approach I used there imports a folder with its full revision history. That, however, may not be needed or wanted in a lot of scenarios (for example, when the SVN repository is too old). In that case we can do a partial import – import only the changes made in the last year, the last couple of months, or the last x revisions of the SVN repository.
In that case we must do a couple of things:
- We must decide the revision number from which we want to start importing the history. You can check the revision numbers and dates with the "Show Log" option in TortoiseSVN, or its equivalent in whatever SVN client you use. We'll call that revision number BeginNum.
- We must find the latest revision number. We’ll call it EndNum.
- We must change the "git svn clone" command we used in my previous post by adding an "-r" option:
git svn clone -r BeginNum:EndNum -A <path-to-users.txt> <SVN-path-to-clone>
And that is it! Git will now import all the revisions from BeginNum to EndNum, leaving out the ones before BeginNum.
A while ago I wrote about how to configure an Azure website's domain. That information, however, is a little outdated after the release of the new Windows Azure features. I'll now explain how to configure an A-record domain name for an Azure website.
The domain for an Azure website can be configured only if the website is in either Shared or Reserved mode in Azure, so the first thing is to set that up. Go to the "Scale" tab in the website's properties and choose the "Shared" or "Reserved" box under "Web Site Mode".
After that, click the "Manage Domains" button at the bottom of the page and add the domain name you want to register under the "Domain Names" label.
From this screen you must also copy the IP Address provided under “THE IP ADDRESS TO USE WHEN CONFIGURING A RECORDS” label.
After that you must go to your domain provider and configure an A record that points to the IP you've copied from the Azure portal. I'm using a service called DNSimple to manage my domain names, which I found through a recommendation by Scott Hanselman on one of his latest podcasts. I find the service very nice – far better than other providers I've used.
The key for the A Record should look like the first row in this picture:
You can also configure a URL redirect (the second row in the picture above) from your base URL to your "www." URL, like I did, so that if someone types "http://apostol-apostolov.com" they'll automatically be redirected to "http://www.apostol-apostolov.com". Or, if you want the base URL to be the main one, you can configure the base URL as the A record and the "www." URL as a redirect to the base URL.
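In zone-file terms, my setup ends up looking roughly like this. The IP is a placeholder for the one copied from the Azure portal, and the redirect entry is a DNSimple "URL record", which is a provider feature rather than a standard DNS record type:

```
; hypothetical zone entries
www.apostol-apostolov.com.   3600  IN  A    137.117.17.1
; plus a URL redirect (DNSimple feature, shown here as pseudo-syntax)
; from apostol-apostolov.com to http://www.apostol-apostolov.com
```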
And that is all. We’ve now configured a domain with A Record pointing to a Windows Azure Website.
Some time ago I wrote about Windows Azure and the experience of hosting and configuring a website with a real domain in Azure. The information in that blog post is a little outdated, so I thought I'd update it with more information about the experience I've had with Azure since then.
I have been using Azure for about 5-6 months now and I'm very pleased with what I get for my money. But let me start from the beginning.
When I started using Azure, the only option for hooking up a real domain name to a website hosted there was to have a reserved instance, and the only WAY to hook up the domain was to use a CNAME. So hosting my 2 personal apps that only I use, plus this blog, meant paying around 60 €/month for 1 small reserved instance. That was a little high, so I struggled a bit between paying that rate for a single, rarely used website and having the ability to change and deploy the website in seconds.
I had been paying for a reserved instance for half a month when Scott Guthrie posted the new improvements to Windows Azure. The update consisted of several things, but the ones I cared about were the introduction of the Shared instance and the ability to hook up a domain name to a Shared instance with an A record. A Shared instance costs 10€/month and a Reserved instance costs around 60€/month. The trick is that with Shared instances you pay 10€/month per website, while with a Reserved instance you pay around 60€/month for all your websites. So if you have more than 6 websites on Shared instances, you may as well buy one Reserved instance for all of them.
With these changes I'm now paying around 10€/month for a Shared instance for my blog, and all my personal apps are on free instances (every account can have up to 10). So for me the cost dropped from 60€ to 10€ a month, and that's a pretty nice deal for all the features Azure offers, like easy website creation and administration, one-click deployments from Git and Visual Studio, and one-click scaling of a website across multiple instances.
Windows Azure makes hosting cheap when you don't have much traffic (a Shared instance) and easy to scale when you start growing and becoming big (multiple Reserved instances). What more do you need?
Now, when you see the default bundles organized by library, your first thought is: "that's the default behavior, so it's probably the one I should use", and you just go with the flow.
Sadly, that leads to making a bundle for each library and calling the @Scripts.Render method for each library you're using on the page. Which leads to this in production:
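That is, one <script> tag per bundle in the rendered page – with a handful of libraries it ends up looking something like this (a made-up example; the version hashes are what the bundling system appends):

```html
<script src="/bundles/jquery?v=Fc6nYVy1"></script>
<script src="/bundles/jqueryui?v=8aGxcNN2"></script>
<script src="/bundles/bootstrap?v=2PuWrdQ3"></script>
<script src="/bundles/knockout?v=Q2hBpfR4"></script>
<script src="/bundles/site?v=oM7KplX5"></script>
```

Five HTTP requests where one would do.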
I think bundling should combine all your files into one so we get the best possible optimization. One way of doing that is to make a bundle for each page, containing the libraries that page needs.
If we do this with the current structure of bundles in MVC 4, however, we wouldn't be able to use a CDN for jQuery and the other commonly used libraries, because the CDN is declared on a per-bundle basis.
A better approach would be to use separate bundles, with CDN configurations, for just the common libraries – jQuery, jQuery UI, Twitter Bootstrap etc. – and a bundle per screen for the custom scripts each screen uses. So in your base _Layout file we could say:
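A sketch of what that could look like (the bundle and section names here are illustrative, not the ones from my real project):

```cshtml
@* Hypothetical _Layout.cshtml: render the shared-library bundles once, *@
@* then leave a section for each page's own bundle. *@
@Scripts.Render("~/bundles/jquery")
@Scripts.Render("~/bundles/jqueryui")
@Scripts.Render("~/bundles/bootstrap")
@RenderSection("Scripts", required: false)
```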
And in each page we make the include:
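For example, in a hypothetical Index.cshtml, assuming the layout defines a "Scripts" section:

```cshtml
@section Scripts {
    @Scripts.Render("~/bundles/index")
}
```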
Where bundles/index is a custom bundle definition for the Index.cshtml page, where you define all the custom libraries needed for the Index.cshtml page. The bundle definition could look like that:
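A sketch of such a definition in BundleConfig.cs (the script paths are made up):

```csharp
// Registered in BundleConfig.RegisterBundles along with the common,
// CDN-backed bundles. Everything the Index page needs goes in one bundle.
bundles.Add(new ScriptBundle("~/bundles/index").Include(
    "~/Scripts/app/index.page.js",
    "~/Scripts/app/index.validation.js"));
```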
That way we get both CDN support with the common libraries as well as the most effective and optimized bundle management for the custom libraries and files that change more frequently. Win-win situation. Or almost.
The downside of this approach is that if you want to put your libraries on your own CDN, this way of organizing bundles wouldn't be very effective, I guess, as you'd have to create a separate file on the CDN for each bundle, i.e. for each page.
A few days ago I blogged about Why bigger teams need Git/Mercurial. Now I’ll be showing how to import a simple SVN folder to a repository in GitHub.
Disclaimer: The method I'm describing here is the most basic way of importing a normal SVN folder to a GitHub repository. There are more advanced ways of importing your SVN folder to Git (if you used branching and tagging) which require different "git svn" command parameters. You can find more information about the other options here.
The logical steps of the import are:
- Make a users.txt file with all the users that committed to the SVN folder.
- Clone the SVN repository to a local folder with “git svn”.
- Create a GitHub repository.
- Make the GitHub repository a “remote” to the cloned SVN repository.
- “Pull” the changes from the GitHub repository locally.
- “Push” the local repository with the merged changes and SVN history to GitHub.
We’ll now look at the specifics of each step.
For the import we'll need a text file listing the users who contributed – who made the commits in the SVN folder. The list should contain each person's SVN username, real name and email address. It should look like this:
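Each line maps an SVN username to a name and email in the format "git svn" expects (the users here are made up):

```
jsmith = John Smith <john.smith@example.com>
akarova = Anna Karova <anna.karova@example.com>
```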
After that we open a PowerShell in an empty folder and clone the SVN repository with the following command:
git svn clone --no-metadata -A c:\path\to\users.txt svn://url-to-svn-server/path/to/folder
The command should look similar to this on your screen, given that you change the paths to the SVN and local users.txt file:
Then we can enter the newly created folder with the "cd" command.
After that we must create a GitHub repository. I'll call the one I'll be using "Git-SVN-Clone". Then, with the following command in PowerShell, we can add a "remote" pointing to the newly created GitHub repository, so we'll be able to push the current local repository, with its history from SVN, to GitHub. We'll add the "remote" under the name "server":
git remote add server https://github.com/asapostolov/Git-SVN-Clone.git
In my case the remote repository URL is "https://github.com/asapostolov/Git-SVN-Clone.git", but you can find yours on the page of your repository in GitHub:
After we've added a remote named "server", we must pull (pull is "fetch and merge") the remote's changes – which in our case is just the initial commit that created the repository on the server – into the local folder.
git pull server master
And then we must push the local changes to the server with:
git push --set-upstream server master
And that's all. We now have a GitHub repository with all the SVN commit history in it – in just a few commands.
An issue came up a while ago in the company I work for: the development team was expanding, because some of the projects had reached later, more mature stages of development – which is a natural thing.
The expansion of the teams led to an interesting thing, however – using SVN was becoming a pain in the ass. What do I mean by that?
Before the expansion, every project in the company had one developer working on it. That meant linear development of the projects: when someone finished a feature or a bug fix, the project easily became ready to publish. With more than one developer on a project, the company could build more than one feature per project at the same time. That process, however, broke the linear progression of the projects. When one person finished their feature, the second could be halfway through theirs, so if you needed to publish the new feature you either had to wait for everyone to finish their work, or hide the unfinished functionality.
I want to clarify that the way the company is using SVN is to have a folder for each project and commit your changes to that folder. A feature is a series of commits. It’s pretty standard actually and I think a lot of people use SVN that way.
In this context I began searching for solutions. And I actually found several solutions. The more interesting are:
- Introduction of branches in SVN – when you need to build a big feature, you make a branch, do the feature, then merge with the main branch. After a bit of research on the matter, however, I found this is not such a good option. Why? When merging, SVN tries to take the two versions of the file you're merging and combine them into one. So if someone has changed that file a little and you've changed it too, you get a merge conflict. Now imagine three or four weeks of one person's work trying to combine with three or four weeks of another's. Disaster. Which leads to the second option.
- Working with no commits – you update frequently and commit only when your feature is ready. However, this means no one can see your progress and no one can help you with the feature. Also, if you do something wrong at the end of the feature, you won't be able to "revert" your mistake. And it feels wrong too.
- Use a distributed source control system (Git or Mercurial) instead of Subversion. I knew distributed source control systems were better, but I didn't actually know why. So I began researching the subject and found some great articles, like the one by Joel Spolsky and this question on Stack Exchange, among a few others.
The big difference between SVN and Git/Mercurial – and the solution to our problems – is in the way the two kinds of source control systems work. SVN tracks the VERSIONS of files and, when merging, tries to unite the two versions you're merging into one. Distributed source control systems like Git and Mercurial track the CHANGES in your files, and when they merge they try to apply and combine every change you've made to a file with the changes other people made to it. In theory that means if you move a method and someone else changes the method's contents without moving it, the source control should be able to figure that out and give you the result – the method moved and changed – without a conflict. That makes merging branches possible with very few conflicts. Nice. And that enables a lot of people to work on the same features and change the same files with little effort and less pain.
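The difference is easy to see with a toy example: two branches edit opposite ends of the same (made-up) file, and git merges them without a conflict:

```shell
# Build a throwaway repository with a 9-line file.
rm -rf /tmp/merge-demo && mkdir /tmp/merge-demo && cd /tmp/merge-demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
printf 'line %s\n' 1 2 3 4 5 6 7 8 9 > app.txt
git add app.txt && git commit -qm "initial version"

git checkout -qb feature                     # first developer's branch
sed -i 's/^line 9$/line 9 (edit B)/' app.txt
git commit -qam "feature: change the end of the file"

git checkout -q -                            # back to the original branch
sed -i 's/^line 1$/line 1 (edit A)/' app.txt
git commit -qam "change the start of the file"

git merge -q -m "merge feature" feature      # no conflict: both edits survive
cat app.txt
```

The merged file contains both "edit A" and "edit B". (SVN can often manage this simple case too; it's the long-lived, heavily diverged branches where the change-tracking model really pays off.)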
It seems using a distributed source control system means you can SCALE your development very efficiently. Very nice.
I think there may be a fourth solution to the problem – implementing a part of the Scrum agile methodology: the sprint. We set a target for a sprint, make changes (commits) throughout it, and at the end of the sprint we have a stable version of the system, which we publish. However, that requires a significant change in the company's processes – how we work, communicate, estimate, plan etc. There's a lot more work and a lot more risk associated with this option.
In the end we chose a distributed source control system – Git, and GitHub in particular. It seems to involve much less pain than SVN.
However, there is still the price of introducing it to the whole team, importing the source code from SVN and learning how to use Git PROPERLY. After all, source control is just a tool in a developer's toolbox, and like every tool, if you don't use it right it can give you a lot of headaches. But I think the whole effort will be well worth it in the end.
Lately I've been developing a lot and trying to memorize the Visual Studio key combinations I use the most, to make my life easier and increase my productivity. I try to observe when I reach for the mouse, and then find the key combinations that would help me use the mouse less often. In the end, every reach for the mouse wastes time and focus.
I found that a lot of times when I use the mouse I try to reach for this menu:
And naturally I tried to find the key combination to open and use it. Here comes my little disappointment. All the other key combinations I needed, I could find on the internet – because I knew what I was looking for: the name of the window/control/functionality to open. But I couldn't find the name of this component, since there are no tooltips on the dropdowns. So I thought: let's look around the keyboard shortcut options in Visual Studio – I should be able to find it there. And then I found this:
So much for the user-friendliness.
Anyway, after a couple of minutes of googling I found the name of the dropdown with the class and method names – it's called the Navigation Bar. Naturally.
The thing that made me sad is that the shortcut for it is Ctrl+F2, and the shortcut goes to the classes dropdown, so if I want to get to the methods dropdown I also need to press TAB. So the whole combination I need is Ctrl+F2 (which I find a little hard to reach) plus TAB. And pressing the combination resets the "methods and properties" dropdown, so it jumps to the first element and moves my cursor on the page. It's not a great experience, and not a friendly one at all.
I did this in Visual Studio 2010 some days ago, and the day before yesterday I saw Visual Studio 2012 with all its shiny new things. Now don't get me wrong – I really like Visual Studio, 2010 and even more so 2012. I really think the teams that made them are some of the best developers in the world, and I believe Visual Studio is the best development environment in the world at the moment. But I was a little disappointed that the shortcuts and the keyboard shortcut options window are the same in Visual Studio 2010 and Visual Studio 2012. I really think that part of Visual Studio could be made better.
So to be constructive I’m giving several propositions regarding keyboard shortcuts in Visual Studio:
- Create a more user-friendly options menu for keyboard shortcuts. Have links or pictures, or at least descriptions with a nice search, for each shortcut. I think that would make more users want to use shortcuts, which would make them happier and more productive.
- Make the Navigation Bar's "methods and properties" dropdown a "first class citizen" of Visual Studio with its own shortcut. As of now I cannot target this dropdown directly, so I have to use Ctrl+F2 plus TAB to get to it.
- The more annoying problem: do not reset the dropdown to the first element when I "TAB" to it. The dropdown should figure out which method on the page I'm in and, instead of resetting my cursor, navigate to that method.
If I think of something else I’ll update the list.
I don't think the ideas I'm proposing here are a major change. I hope the people behind usability on the Visual Studio team will read this post and think about improving the things I'm proposing.
Or at least that’s what I would expect from a team that’s created such a great product as Visual Studio 2012.
As a .NET developer of a couple of years, I've read a lot of blogs and articles on the web. A big part of them stated how essential it is for an app or project to have unit tests. In fact, they were unanimous that the very definition of legacy code is code that is not covered by unit tests.
Imagine my amusement now when I look back at my couple of years of software development and cannot find a single project I worked on that has unit tests in it. Not one. I've worked for 3 companies on several different projects, from business process management software to websites and CMSes to enterprise applications. I would go so far as to say that I don't even know a person (and I have friends who are developers) whose company uses unit testing in its projects. But enough of that – my purpose is not to rant about how unit testing and automated tests are not widely adopted.
I want to explore the field and show the basics of unit testing, how it’s done and why it’s important.
So let’s start with the definition of Unit test.
A unit test is a small, repeatable piece of code that tests one piece of functionality – in most cases one method. It doesn't test how functionalities interact with each other. It doesn't test the environment, the database or any other environmental dependency (there are other types of tests for that). It tests the business logic of the method – what the method does. With unit tests, all external or environmental dependencies have to be hidden away – stubbed or mocked.
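A minimal sketch of what such a test looks like, with a hypothetical Calculator class and NUnit-style attributes (the actual framework choice comes in the next post):

```csharp
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsTheSumOfTwoNumbers()
    {
        // Arrange: build the object under test.
        var calculator = new Calculator();

        // Act: call the one method we're testing.
        int result = calculator.Add(2, 3);

        // Assert: verify the business logic.
        Assert.AreEqual(5, result);
    }
}
```

Small, repeatable, no database, no environment – just the method's logic.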
But what is unit testing as a process?
Unit testing as a process essentially is automating the validation of every piece of business logic in your application through tests.
I'm sure you're beginning to see the value in it. You can stop being scared of refactoring a bad piece of code, small or big. Also, if a piece of code is tightly coupled with other code, unit testing it becomes very hard – so unit testing pushes your code to be well structured. I'm sure there are many other benefits I can't think of right now but will come up later.
In this post I've given a basic definition of what a unit test is and what the idea behind the process of unit testing is. In my next post I'll choose the technologies and frameworks I'm going to write my unit tests with, and I'll also discuss why I chose them. After that I'll continue with some practice – creating some simple unit tests for an MVC web app.
What do I mean by that? Well let me give you some history.
A while back I worked on a big enterprise platform/project where we did not have much experience using NHibernate. I was learning it while writing the code for the framework, and everything looked peachy. I even found ways to optimize my code to make it 10+ times faster – like having a session-per-request so I wouldn't open a transaction every time I made a query to the database (if you don't know: transactions are very expensive).
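A rough sketch of that session-per-request idea (a hypothetical Global.asax.cs; the session-factory configuration and error handling are omitted):

```csharp
using System.Web;
using NHibernate;

public class MvcApplication : HttpApplication
{
    // One expensive-to-build factory for the whole application.
    private static readonly ISessionFactory SessionFactory = BuildSessionFactory();

    protected void Application_BeginRequest()
    {
        // One session and one transaction per HTTP request...
        var session = SessionFactory.OpenSession();
        session.BeginTransaction();
        HttpContext.Current.Items["nhibernate.session"] = session;
    }

    protected void Application_EndRequest()
    {
        var session = (ISession)HttpContext.Current.Items["nhibernate.session"];
        if (session == null) return;

        // ...committed once at the end, instead of once per query.
        session.Transaction.Commit();
        session.Dispose();
    }

    private static ISessionFactory BuildSessionFactory()
    {
        // Hypothetical: reads hibernate.cfg.xml.
        return new NHibernate.Cfg.Configuration().Configure().BuildSessionFactory();
    }
}
```

Repositories then pull the current session out of HttpContext.Current.Items instead of opening their own.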
One of the problems, I think now, was the notion that we were not creating an application but a platform, on top of which we would then build applications. So we worked for a couple of months without a client, and when we did begin a project for a client, we decided to ship it only when it was fully ready. So for a long time we had no real feedback on how the system was working (no information on whether it was slow or fast, or whether it had production problems).
Naturally I became a fan of Ayende because of all the awesome features of NHibernate that he posted in his blog and that were buried deep down in the NHibernate code.
Then one day I found Ayende’s NHibernate Profiler. If you don’t know what it is – it’s an application that connects to NHibernate’s logs and analyzes them. It shows you common and not so common pitfalls you have in your application. If you’re using NHibernate and not using NHibernate profiler – you’re doing it wrong.
So when I ran NHibernate Profiler I was like: WOW! I was astonished by the number of problems we had in the application – SELECT N+1, almost no paging anywhere, etc.
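For anyone who hasn't hit it: SELECT N+1 looks perfectly innocent in code. A hedged sketch with made-up Order/Customer entities (Fetch comes from the NHibernate.Linq namespace):

```csharp
using System;
using System.Linq;
using NHibernate.Linq;

// One query for the orders, then one extra query per order as each
// Customer proxy is lazy-loaded: N+1 round trips to the database.
var orders = session.Query<Order>().ToList();
foreach (var order in orders)
{
    Console.WriteLine(order.Customer.Name);
}

// Eagerly fetching the association turns it into a single joined query.
var ordersWithCustomers = session.Query<Order>()
    .Fetch(o => o.Customer)
    .ToList();
```

The profiler flags the first pattern immediately; in the code itself, nothing looks wrong.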
So what was the real problem in this situation? We worked with objects. We didn't know, or didn't give any real thought to, what actually happens underneath NHibernate – how the queries are generated and what happens in the DB with those queries. We didn't think about the data access. After all, we had abstracted it away, hadn't we?
I left that company a while back, and now I'm not using NHibernate much (because the infrastructure in the company I work for doesn't use it – not because I don't like it; I actually think it's awesome). I had forgotten about this story until I saw the recent announcement of Umbraco's decision to stop development of the new version of their CMS. At first I didn't quite understand why. At the keynote event where they announced the decision, they said something about their architecture being flawed. They also said the development team's lack of experience with NHibernate was part of the problem, and that they shouldn't have built the new version "in the dark" – away from the community (away from the customers). After that I read Ayende's review of Umbraco's new version. And guess what? It was like déjà vu. Maybe the business was different, and the applications, logic, teams and decisions were different, but the root cause was the same: they didn't think about the data access.
And now we get to the point of this article. NHibernate is a gun. It gives you the power and freedom to do all kinds of things. If you're seeing it for the first time and/or don't know how to use it, you can easily point it in the wrong direction – like at your own head – and shoot yourself with it. But if you know what NHibernate is and what it does, and you use it right, it will keep you safe and sound from the dangers of the jungle.
Disclaimer: The information in this post is a little outdated. Check out the more up-to-date information about configuring Windows Azure Websites with Domain.
Some time ago the Windows Azure team shipped a new "version" of the service. I was, and still am, very thrilled about the new features they put in there. I like the new interface very much and just love the TFS and Git automatic deployments. It's as if the Azure team knew what would make developers happy and more productive, and put exactly that in.
Naturally, I began porting one of my sites (applications) to Windows Azure. I followed the procedure as described in the very detailed tutorials. If you haven't tried it, you should – it's so natural and easy to use that it puts a big smile on your face.
Then I decided to point my domain to the Windows Azure website, so I searched the web for information and found a good explanation on Azure's blog. Now let me say: I'm not very good at managing domains. Just under a year ago I bought my first domain and ran some tests with it; I never used it much on the whole.
And here was my first disappointment – we CAN point a domain to our hosted sites, but only if they're on a RESERVED instance. It seems shared instances cannot have a custom domain. It's good that the Azure team wants to fix that and will implement CNAME support for shared instances in the future, though. Hooray!
So the next thing was to switch to reserved mode and try attaching the domain via a CNAME. Following the explanation on Azure's blog, I made a CNAME configuration at my domain provider, attached to apostol-apostolov.azurewebsites.net – the app I was testing the domains with. Then I made all the subdomains redirect to "www", and made "www" a CNAME pointing to apostol-apostolov.azurewebsites.net. The configuration looked like this:
And then I typed my www.apostol-apostolov.com and - well, I hit a wall.
So now what? I searched a little, but every article out there had the same explanation: make a reserved instance, point the domain with a CNAME to the server alias, and that's all. Maybe because I have little experience with domains, I couldn't see what I was missing.
Finally, after poking around for a couple of hours, I figured it out.
You have to add the domain you’re pointing to the list of hostnames of your website. And voila! You have a fully functional site with domain in the new Windows Azure.
Now I’m thinking of making a little test for a week or two on “how much will the reserved instance cost me”. And “how much will Windows Azure make my life easier on a daily basis”.