Apostol Apostolov

Practical thoughts about software

Managing RavenDB session in .NET web application

When using RavenDB in the context of a web application, we should not open a RavenDB session for every database operation. Rather, a session should be opened once per request/response cycle (every button click, for example).

A way to do that is to open RavenDB's DocumentStore in Application_Start, open a new session in Application_BeginRequest, and close the session in Application_EndRequest, all in your application's Global.asax file:
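A minimal sketch of that wiring (the store URL and the "RavenSession" key in HttpContext.Items are placeholders – adjust them to your setup):

using System.Web;
using Raven.Client;
using Raven.Client.Document;

public class MvcApplication : HttpApplication
{
    // One DocumentStore for the whole application – creating it is expensive.
    public static IDocumentStore Store { get; private set; }

    protected void Application_Start()
    {
        Store = new DocumentStore { Url = "http://localhost:8080" }.Initialize();
    }

    protected void Application_BeginRequest()
    {
        // One fresh session per request/response cycle.
        HttpContext.Current.Items["RavenSession"] = Store.OpenSession();
    }

    protected void Application_EndRequest()
    {
        var session = (IDocumentSession)HttpContext.Current.Items["RavenSession"];
        if (session == null)
            return;

        using (session)
            session.SaveChanges();
    }
}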

In the case of ASP.NET MVC we are going to have a BaseController – the one all controllers inherit from. In the BaseController we have:
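A sketch of it – the session property is left settable so that a test can inject its own (for example an in-memory) session:

using System.Web.Mvc;
using Raven.Client;

public abstract class BaseController : Controller
{
    // Settable from the outside, so unit tests can supply their own session.
    public IDocumentSession RavenSession { get; set; }

    protected override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Outside of tests, pick up the session opened in Application_BeginRequest.
        if (RavenSession == null)
            RavenSession = (IDocumentSession)filterContext.HttpContext.Items["RavenSession"];

        base.OnActionExecuting(filterContext);
    }
}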

When we need to do something and use the session in a controller that inherits from BaseController:
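For example (Order is a hypothetical document class used for illustration):

using System.Web.Mvc;

// Hypothetical RavenDB document class.
public class Order
{
    public string Id { get; set; }
}

public class OrdersController : BaseController
{
    public ActionResult Details(string id)
    {
        // The session is already open – we just use it.
        var order = RavenSession.Load<Order>(id); // e.g. "orders/1"
        if (order == null)
            return HttpNotFound();

        return View(order);
    }
}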

This way of defining and using RavenDB's session decouples the use of the session from its opening and closing, and also from the DocumentStore. The approach is very good for scenarios where we want to use the same controllers outside the scope of the current web application – for example, in a unit-testing project.


Changing the Web Hosting Plan of Windows Azure Website

I recently needed to change the Web Hosting Plan of one of my Azure Websites. I keep all my websites in a single hosting plan so I can manage and scale them more easily. However, I accidentally created a new website under a new hosting plan, and because of that the website's scaling was managed separately from the other sites.

To be able to manage them together I needed to put them in the same hosting plan. Unfortunately, Azure Websites does not provide this via the management portals at the moment; the only way for now is Azure PowerShell.

There is actually a tutorial that describes the process of changing the Web Hosting Plan, but the last part of it (the one with the hash table) didn't work, so I am posting the commands that worked for me. Substitute all the parameters in <angle brackets> with the corresponding values.

First install Azure PowerShell. Then use the following commands to initialize:

Switch-AzureMode AzureResourceManager

Add-AzureAccount

Then select the proper subscription (optional – only needed if you have multiple Azure subscriptions):

Select-AzureSubscription '<name of the subscription>'

Then, if we want, we can look at the resource group of our websites (you can find the resource group name in the new Azure Portal under Browse > Resource Groups):

Get-AzureResourceGroup <name of resource group>

Then we get the information about the website (the one we need to move) and assign it to a variable (in our case $r):

$r=Get-AzureResource -Name <name of the website> -ResourceGroupName <resource group of the website> -ResourceType Microsoft.Web/sites -ApiVersion 2014-06-08

We can see the values of $r by calling:

$r

Then we assign just the properties of $r to a new variable, $p:

$p=$r.Properties

Of course, we can see the newly assigned properties with the command:

$p

Then we set the properties 'serverFarm' and 'webHostingPlan' to the corresponding values we want – the Web Hosting Plan we need to move to. If you're not sure of the name of the plan you need to move to, check it with the Get-AzureResource cmdlet shown above, executed against a website that is already inside the desired Web Hosting Plan.

$p.serverFarm="<name of web hosting plan>";$p.webHostingPlan="<name of web hosting plan>";

And finally it's time to overwrite the values in our website by passing $p as the PropertyObject of the command:

$r2=Set-AzureResource -Name <name of the website> -ResourceGroupName <resource group of the website> -ResourceType Microsoft.Web/sites -ApiVersion 2014-06-08 -PropertyObject $p

If there are no errors, we can check whether the website's configuration is in the desired state by calling:

$r2

And that's all. Now our website is in the same Web Hosting Plan as the rest of the websites, and we can manage it more easily!


Fixing large amounts of RavenDB conflicts

Some time ago I had a small issue with RavenDB master-master replication which ended up creating 50,000 conflicted documents on a live database. It all ended fine, but I had a somewhat hard time finding info on how to fix that kind of issue 'fast'. Don't get me wrong – there is very nice RavenDB documentation about replication and conflicts, but I didn't find a fast-and-easy solution (one that involves a couple of clicks in Raven's studio) and I was a little disappointed about that. Guess RavenDB makes you a little spoiled.

Anyway, here’s my solution:

1. First, you create a RavenDB index (you can create it through the studio):

Let's name it ConflictsIndex. This index extracts all the documents of the database into an index so you can query them.
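A sketch of its definition, as you would enter it in the studio – a map over all documents with no output fields, just enough to make every document queryable:

// Index name: ConflictsIndex
// Map:
from doc in docs
select new { }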

2. Then you execute it to see the results.

[Screenshot: running the ConflictsIndex query in the studio]

Don't worry if there are no results at first. If you have a large amount of data and a large number of conflicted documents, RavenDB will need some time to index them all. During that time you can:

3. Create a console app that fixes the conflicted documents.

This console app will load all the documents from the newly created index in batches of 1,000 (RavenDB usually has a default limit of 1,024 items per query, so we use 1,000). If any loaded document is conflicted, it will be automatically fixed by the TakeNewestConflictResolutionListener, which always chooses the newest version as the resolved one. If you need different custom logic for resolving the conflicts, the listener is the place to insert your code.
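Here is a sketch of that console app together with the listener. The URL and database name are placeholders, and the listener follows the IDocumentConflictListener contract of the RavenDB 2.x client – check the API of your client version:

using System;
using System.Linq;
using Raven.Abstractions.Data;
using Raven.Client.Document;
using Raven.Client.Listeners;

class Program
{
    static void Main()
    {
        using (var store = new DocumentStore { Url = "http://localhost:8080", DefaultDatabase = "MyDatabase" })
        {
            store.Initialize();
            store.RegisterListener(new TakeNewestConflictResolutionListener());

            var processed = 0;
            while (true)
            {
                using (var session = store.OpenSession())
                {
                    // Loading a conflicted document triggers the listener, which resolves it.
                    var batch = session.Advanced.LuceneQuery<object>("ConflictsIndex")
                                       .Skip(processed)
                                       .Take(1000)
                                       .ToList();
                    if (batch.Count == 0)
                        break;

                    processed += batch.Count;
                }
            }
        }
    }
}

public class TakeNewestConflictResolutionListener : IDocumentConflictListener
{
    public bool TryResolveConflict(string key, JsonDocument[] conflictedDocs, out JsonDocument resolvedDocument)
    {
        // Always pick the most recently modified version.
        // If you need different resolution logic, this is the place for it.
        resolvedDocument = conflictedDocs.OrderByDescending(d => d.LastModified).First();

        // Strip the stale system metadata from the chosen version.
        resolvedDocument.Metadata.Remove("@id");
        resolvedDocument.Metadata.Remove("@etag");
        return true;
    }
}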

4. Execute the console application against the database. You can wait for the ConflictsIndex to finish indexing and then run the console app. If the database is remote (most production databases are), you can copy the console app to a PC on the same LAN as the database server, so the process runs faster.

It's also good to register a default conflict-resolution listener in your application if you don't have one – just in case. Without one, every attempt to load a conflicted document throws an exception, so the situation can get pretty heated pretty fast.

And that's it. Usually you shouldn't get into situations like this, but when you do, it's good to know how to get out of them.


The challenges and value of using Git

Some time ago I wrote about my assessment of using Git versus SVN at the company I work for. It's been about a year this month since we successfully adopted Git – GitHub in particular – and moved all the company's source code to it. I wanted to talk about the process and the issues we had in doing this. Before I begin I must say that I, and probably the whole team, am already pretty convinced that moving to a DVCS, and Git specifically, is one of the best things we did. It helped us so much with our feature development that the problems we first had with it were quickly forgotten.

In the previous post I mentioned above I laid out the problems we had. Now I'll explain how we work one year later.

The company I work for is a product company with a couple of web projects (products). Our development team is about 7–8 people in total (including me), and each person is responsible for developing one main feature at a time, while also supporting features they've built that are live or not under active development at the moment.

For this way of developing we have one main repository where we keep most of our code. The repository is a single one for all our projects (not one per project), because a couple of libraries are shared between all the projects, and the most painless scheme for developing and sharing those libraries is keeping all the projects in one repository.

That said, the way we use the repository – the thing that makes us super flexible – is having one branch for each feature we're developing. When we work on a feature, let's call it Feature-X, it is developed on Branch-X by the developer in charge. When someone else (a colleague or a manager) needs to see how Feature-X is going, they can just switch branches, load the project in Visual Studio and hit F5. There is always one 'master' branch holding the 'live' version, which gives us the ability to quickly fix a bug and deploy to production without anyone having to do any extra work. It's awesome. And super flexible.

Anyway, I wanted to talk about the challenges we had when integrating Git into our work process.

The team

There is one particular challenge with core .NET developers – they don't trust technology that isn't coming from Microsoft. And Git is not a Microsoft technology. It's something that is open source and "free" and not-to-be-trusted. For example, there is no good visual tool for working with Git, there is no good integration with Visual Studio (like a plugin), and the official GitHub for Windows app is very nice but sadly still pretty limited in its capabilities. So the best way to use and learn Git is to work with the console – which my teammates felt was a step back rather than a step forward in our tooling.

The tooling

Frankly, there are a number of nice open-source Git GUI applications out there – the best, in my opinion, being Git Extensions. But there is a big challenge with these applications. If you come from an SVN or TFS background and you don't know what's going on in Git, you can get VERY frustrated with the visual applications. They give you all the power of all the Git commands at your fingertips, but when you don't know what the commands do… well, it becomes a mess, trust me. So the best way to learn Git is by using it through a console. That way, when you need something done, you look up the command you need, learn what it does and then use it. And that way you get to know Git a little bit better.

Well, to be frank, a few operations make up the common everyday work: commit, push to the server, pull from the server and switch branches. The GitHub for Windows app I mentioned above does these very well, so I use a combination of the GitHub app for the basic operations and the command line for all the advanced merging, rebasing, cherry-picking etc.
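For those advanced operations the console incantations are short (the branch name and commit hash here are illustrative):

git checkout master
git merge Branch-X          # bring the finished feature into master
git rebase master           # replay the current branch on top of master
git cherry-pick a1b2c3d     # copy a single commit onto the current branch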

The Learning Curve

Talking about merging, rebasing, cherry-picking… well, Git is not like SVN. Git is really simple, but if you don't 'get' it, it becomes really hard. There are a lot of commands and a lot of things you need to understand if you don't want to bang your head against a wall. It's a lot easier if there is a person in your team (or in your company) who can explain the theory behind DVCS, the graph representation of commits and branches, and what Git is doing behind the scenes with those hashes on each commit… there's a lot of stuff. But it's a lot simpler than it sounds.

If you want to learn more about Git, there are a couple of materials that I found very good.

But the one that opened my eyes the most is the video Git Happens by @Jessitron. I really recommend it if you need to 'get' Git.

Line Endings

Ohhh… the line endings. This is one of the most frustrating things in Git – dealing with the different line endings of the files in your repository. The problem is that Unix-based operating systems (Linux, OS X) use LF to denote the end of a line, while Windows uses CRLF. That results in very frustrating situations where you haven't changed a file, but Git nevertheless picks it up as changed and wants you to commit it.

There is a simple fix for line endings that you should apply when you create a repository: set up a .gitattributes file in the repository with the proper configuration.

Here’s the .gitattributes of one of my projects:
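A minimal sketch of what goes in it (the exact rules vary per project):

# Normalize line endings in everything Git detects as text
* text=auto

# Declare common source files explicitly as text
*.cs text
*.cshtml text
*.js text
*.css text

# Never touch binary files
*.png binary
*.dll binary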

If you need more info on the subject you can check out Mind the end of your line and the GitHub article on the subject.

The paid version

I should also note that there are a couple of companies offering premium DVCS for the enterprise, which help you deal with all the challenges I've described here – but they come at a price of $25–$30 per person per month. That is not that much, considering the salaries and overall expenses a company has for a single developer, but companies differ: for some it's acceptable and for some it's not. With more developers the price adds up, and some companies simply don't want to depend on an outside company for the core of their business – the source code. But it's good to know that such premium solutions exist on the market.

Conclusion

As a conclusion I have to say that yes – moving to a DVCS is not an easy thing. It takes time and effort to get it right. But it's TOTALLY worth it. It gives you the freedom to work as you like on the features you're developing – to use branching and merging without any fear, instead of conforming to your tools (and no one needs to shout across the room: "Everyone – no more commits, I'm merging!").

I've gotten so used to this ease of work that I actually cannot imagine working in a non-DVCS environment, so I strongly recommend it to every company out there. And even if your company doesn't want to move to a DVCS, go ahead and make an account on GitHub or Bitbucket for your personal projects. It's good experience, and you never know when you're going to need it or how it's going to change your life.


What now? The problems in Bulgaria and how we can solve them

I usually write in English about technology and software – that's what fascinates me. The recent protests of the young and intelligent people who are tired of the ruling oligarchic circles in Bulgaria inspired me to write this post, in which I offer my view of some of the social and political problems, and possible solutions.

These last few days I have been truly amazed by the young people of Bulgaria, who seem to be waking up from hibernation, or are simply fed up, or – the way I feel it – the cup has simply overflowed. What amazes me is that the goal of these people's protests is not getting their electricity bills paid or ousting this or that politician – their goal is to create a better, more stable society in which they and their children can live. Yesterday, while reading calls on the internet to ignore the provocations, and calls like this one to help the police officers by handing them a bottle of water in the terrible heat… for a moment I truly felt proud to be Bulgarian – proud of the young, intelligent and good people of Bulgaria. I'm glad you exist!

[Photo: the protests]

Almost everyone in this country knows what our problems are, but very few people offer solutions to them. That's why I really liked @peteriliev's idea of having people with ideas propose solutions to society's problems in the form of blog posts. In the following lines I'll offer my view of where the roots of the problems lie and how we could solve them.

  • Lack of information, or low transparency – nobody in society really knows what is happening in parliament: which laws are being passed, how they are passed, what problem the current amendment to a law's text solves, and whether there is a better way to solve it. Beyond that, the public has no idea what contracts are being signed and on what terms, who made the decision on those contracts, and what the motivation behind that decision was compared to the other offers.

I personally was ASTONISHED when the caretaker minister Asen Vassilev announced that the state has 12,000 megawatts of generating capacity, of which 5,000 covers the entire consumption and export of Bulgaria. How could the price of electricity not be high when we have 7,000 megawatts sitting idle that still have to be paid for!?! And why on earth would we need the Belene nuclear power plant?!?

  • Low public participation in governance – even if we know what's going on, there is nothing we can do. We have power over whom we elect to govern us for the next 4 years, and that's it. Referendums in their current form are cumbersome, slow and expensive (a single referendum costs several million leva) and, on the whole, ineffective. And if someone irritates us badly enough, we go out to protest, bring him down and elect another "pumpkin" in his place.

To this I can add the dead souls on the electoral rolls, voters with little education, and vote buying.

There are probably other problems too, but in my view these are at the root of all the rest. In the following lines I'll give my solutions to these problems as I see them. Maybe it's because I'm a programmer and, in general, a technology person, but I sincerely believe there are very few problems in the world that cannot be solved with software and/or innovative new technologies. That's why my solutions lean in that direction.

  • Put all laws into a version control system, as they have done in Germany (here is the Bundestag's account). Over there it was done unofficially (I believe), whereas here it could be introduced as an official requirement. This would let anyone see not only the laws but also how those laws have changed over time. Parties planning amendments to a law could enter the changes into the version control system before those changes are put to a vote in parliament. The changes could also be argued for – for example, "how dropping the requirement for 10 years of experience in the system would help the head of DANS be more effective". Everything would be accessible and visible online, so the people who are interested could review the changes, comment on them or propose a better variant. If it became a requirement for every amendment or parliamentary decision to be entered into the system at least one week before the vote, I think people would get a pretty good idea of what decisions are about to be made and why. And there would be no more excesses like decisions on state security being made in 15 minutes.
  • The same could be done for NEK's contracts and for all public procurement: the decisions made about them should be argued for and accessible to everyone, along with the bids.

The goal of all this is full transparency of the decisions, the contracts and the proposed laws, so that people know about them and take an active part in the process of making them.

  • An electronic voting system and e-government. An electronic voting system could make referendums cheap, fast and, on the whole, effective. And through such referendums the people – the voters – could take part in political life far more easily.
  • A card, a password or, in general, the means to vote electronically could be issued to citizens upon presenting a diploma for primary (secondary) education and an ID card proving they are of age.
  • The electronic system's source code should be open source, so that everyone can be sure the vote cannot be manipulated.

 

At the core of my proposals is one idea:

Since we cannot trust the politicians to govern us, let's make it possible to participate in governance as much as we can, by making informed and transparent decisions.

There is no need to go to extremes. Perhaps only the most important decisions should be put to a referendum. But referendums should become more frequent.

I'll also throw in a few questions worth thinking about:

1. Where could ideas like mine, like those of other bloggers and, I hope, of other people, be collected, discussed and developed so they can benefit society as a whole?

2. Who could spread such "untraditional" ideas and bring them into parliament? The current political faces? Hardly…

I hope I've given the people reading these lines some ideas, directions or food for thought. In my opinion there are ways and ideas to improve the situation. We just have to be active, to have an opinion and stand behind it, and not let ourselves be trampled on.


Partial import from SVN to GitHub

A while ago I wrote about how to import an existing SVN repository to GitHub. The approach I used there imports a folder with its full revision history. That, however, may not be needed or wanted in a lot of scenarios (for example, when the SVN repository is too old). In such cases we can do a partial import – import only the changes made in the last year, the last couple of months, or the last "x" revisions of the SVN.

To do a partial import, we need a couple of things:

  1. Decide from which revision number onward we want to import the history. You can check the revision numbers and dates in the "Show Log" option of TortoiseSVN, or its equivalent in whatever SVN client you use. We'll call this revision number BeginNum.
  2. Find the latest revision number. We'll call it EndNum.
  3. Change the "git svn clone" command from my previous post by adding an "-r" option:

git svn clone -r BeginNum:EndNum -A <path-to-users.txt> <SVN-path-to-clone>
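For example, to import everything from a (hypothetical) revision 1500 up to the latest one – HEAD works as EndNum, and the paths are illustrative:

git svn clone -r 1500:HEAD -A c:\path\to\users.txt svn://url-to-svn-server/path/to/folder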

And that is it! Git will now import all the revisions from BeginNum up to EndNum, skipping the ones before BeginNum.

Nice!


Configuring Azure Website Domain

A while ago I wrote about how to configure an Azure website's domain. That information, however, is a little outdated after the publication of the new Windows Azure features. I'll now explain how to configure an A-record domain name for an Azure Website.

The domain of an Azure website can be configured only if the website is on either a Shared or a Reserved instance, so that's the first thing to set up. Go to the "Scale" tab in the website's properties and choose the "Shared" or "Reserved" box under "Web Site Mode".

[Screenshot: the Web Site Mode setting on the Scale tab]

After that, click the "Manage Domains" button at the bottom of the page and add the domain name you want to register under the "Domain Names" label.

[Screenshot: the domain names dialog with the IP address to use]

From this screen, also copy the IP address provided under the "THE IP ADDRESS TO USE WHEN CONFIGURING A RECORDS" label.

After that, go to your domain provider and configure an A record that points to the IP you copied from the Azure portal. I'm using a service called DNSimple to manage my domain names, which I found through a recommendation by Scott Hanselman on one of his latest podcasts. I find the service very nice – far better than other providers I've used.

The key for the A Record should look like the first row in this picture:

[Screenshot: the A record and URL redirect rows at the DNS provider]
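In generic zone-file terms, the two rows amount to roughly this sketch (the IP is the one copied from the Azure portal; URL is DNSimple's redirect record type):

www.apostol-apostolov.com.   A    <IP address copied from the Azure portal>
apostol-apostolov.com.       URL  http://www.apostol-apostolov.com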

You can also configure a URL redirect (the second row in the picture above) from your base URL to your "www." URL, like I did, so that anyone typing "http://apostol-apostolov.com" is automatically redirected to "http://www.apostol-apostolov.com". Or, if you want the base URL to be the main one, configure the base URL as the A record and the "www." URL as a redirect to the base URL.

And that is all. We've now configured a domain with an A record pointing to a Windows Azure Website.


The experience of Windows Azure websites

Some time ago I wrote about Windows Azure and the experience of hosting and configuring a website with a real domain there. The information in that blog post is a little outdated, so I thought I'd update it with more about my experience with Azure since then.

I have been using Azure for about 5–6 months now, and I'm very pleased with what I get for my money. But let me start from the beginning.

When I started using Azure, the only option for hooking a real domain name up to a website hosted there was to have a reserved instance, and the only WAY to hook up the domain was a CNAME. So hosting my two personal apps that only I use, plus this blog, meant paying around €60/month for one small reserved instance. That was a little high, so I struggled a bit between paying that rate for a single, little-used website and having the ability to change and deploy the website in seconds.

I had been paying for a reserved instance for half a month when Scott Gu posted the new improvements to Windows Azure. The update consisted of several things, but the ones I cared about were the introduction of the Shared instance and the ability to hook a domain name up to a Shared instance with an A record. A Shared instance costs €10/month and a Reserved instance costs around €60/month. The trick is that with Shared instances you pay €10/month per website, while with a Reserved instance you pay around €60/month for all your websites. So if you have more than six websites on Shared instances, you may as well buy one Reserved instance for all of them.

With these changes I'm now paying around €10/month for a Shared instance for my blog, and all my personal apps are on free instances (every account can have up to ten). So for me the cost dropped from €60 to €10 a month – a pretty nice deal for all the features Azure offers, like easy website creation and administration, one-click deployments from Git and Visual Studio, and one-click scaling of a website across multiple instances.

Windows Azure makes it cheap when you don't have much traffic (Shared instances) and easy to scale when you start growing big (multiple Reserved instances). What more do you need?


ASP.NET MVC 4 bundles organization strategy

Lately I've been using ASP.NET MVC 4 on a couple of projects, and I noticed I was having trouble organizing my JavaScript files properly. The reason: a default MVC 4 web application comes with a default bundling strategy that looks like this:

[Screenshot: the default bundle definitions in a new MVC 4 project]

Now, when you see the default bundles organized by library, your first thought is "that's the default behavior, so it's probably the one I should use", and you just go with the flow.

Sadly, that leads to making a bundle for each library and calling the @Scripts.Render method for each library you're using on the page. Which leads to this in production:

[Screenshot: one script request per library in production]

And if you add your own JavaScript files or custom libraries, the list can get long pretty fast.

I think bundling should combine all your files into one, so we get the best possible optimization. A way of doing that is to make a bundle for each page, containing the libraries that page needs.

If we do this with the current structure of bundles in MVC 4, however, we wouldn't be able to use a CDN for jQuery and the other commonly used libraries, because the CDN is declared on a per-bundle basis.

A better approach is to use separate bundles, with CDN configurations, for just the common libraries – jQuery, jQuery UI, Twitter Bootstrap etc. – and, for the custom libraries, a bundle per screen with the libraries that screen uses. So in your base _Layout file we could say:

[Screenshot: the bundle includes in _Layout]
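A sketch of those includes – the bundle names are illustrative, and the per-page include goes into a Razor section rendered here:

@Scripts.Render("~/bundles/jquery")
@Scripts.Render("~/bundles/bootstrap")
@RenderSection("scripts", required: false)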

And in each page we make the include:

[Screenshot: the per-page bundle include]
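Something like this, assuming the "scripts" section from the layout sketch above:

@section scripts {
    @Scripts.Render("~/bundles/index")
}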

Here ~/bundles/index is a custom bundle definition for the Index.cshtml page, declaring all the custom libraries that page needs. The bundle definition could look like this:

[Screenshot: the custom bundle definition]
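A sketch of both kinds of bundles in BundleConfig – the CDN URL and file names are illustrative:

using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // Allow bundles that declare a CDN path to be served from it.
        bundles.UseCdn = true;

        // Common library: served from a CDN, falling back to the local file.
        bundles.Add(new ScriptBundle("~/bundles/jquery",
            "//ajax.aspnetcdn.com/ajax/jQuery/jquery-1.8.2.min.js")
            .Include("~/Scripts/jquery-{version}.js"));

        // Per-page bundle: everything Index.cshtml needs, combined into one file.
        bundles.Add(new ScriptBundle("~/bundles/index").Include(
            "~/Scripts/knockout-{version}.js",
            "~/Scripts/app/index.js"));
    }
}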

That way we get CDN support for the common libraries, as well as effective, optimized bundles for the custom libraries and files that change more frequently. A win-win situation. Or almost.

The downside of this approach is that if you want to put your libraries in your own CDN, this method of organizing bundles wouldn't be very effective, I guess, as you'd have to make a separate file in the CDN for each bundle, i.e. for each page.

However, if you're not planning on using a personal CDN, this approach to bundle organization can save you a lot of confusion in dealing with all the custom JavaScript libraries and scripts you use in your pages.


Importing a simple SVN folder to GitHub

A few days ago I blogged about why bigger teams need Git/Mercurial. Now I'll show how to import a simple SVN folder into a repository on GitHub.

Disclaimer: The method I'm describing here is the most basic way of importing a normal SVN folder into a GitHub repository. There are more advanced configurations (if you used branching and tagging) that require different "git svn" command parameters. You can find more information about the other options here.

The logical steps of the import are:

  1. Make a users.txt file with all the users that committed to the SVN folder.
  2. Clone the SVN repository to a local folder with “git svn”.
  3. Create a GitHub repository.
  4. Make the GitHub repository a “remote” of the cloned SVN repository.
  5. “Pull” the changes from the GitHub repository locally.
  6. “Push” the local repository with the merged changes and SVN history to GitHub.

We’ll now look at the specifics of each step.

For the import we'll need a text file listing the users who contributed – who made the commits in the SVN folder. Each entry should hold the SVN username, the person's real name and their email address. It should look like this:

[Screenshot: the users.txt file]
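Something like this – the standard git-svn authors-file format, with made-up people:

jsmith = John Smith <john.smith@example.com>
adoe = Anna Doe <anna.doe@example.com>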

After that we open PowerShell in an empty folder and clone the SVN repository with the following command:

git svn clone --no-metadata -A c:\path\to\users.txt svn://url-to-svn-server/path/to/folder

The command should look similar to this on your screen, with the SVN path and the local users.txt path changed to your own:

[Screenshot: the clone command running in PowerShell]

Then we can open the newly created folder with

cd \local-path\to\repository

After that we must create a GitHub repository. I'll call mine "Git-SVN-Clone". Then, with the following PowerShell command, we add a "remote" pointing to the newly created GitHub repository, so we can push the local repository, with its SVN history, to GitHub. We'll name the remote "server":

git remote add server https://github.com/asapostolov/Git-SVN-Clone.git

In my case the remote repository URL is "https://github.com/asapostolov/Git-SVN-Clone.git", but you can find yours on the page of your repository on GitHub:

[Screenshot: the repository URL on the GitHub repository page]

After we've added a remote named "server", we must pull (pull is "fetch and merge") the remote's changes (which in our case is just the initial commit that created the repository on the server) into the local folder:

git pull server master

And then we must push the local changes to the server with:

git push --set-upstream server master

And that's all. We now have a GitHub repository with the full history of SVN commits in it – in just a few commands.


Why bigger teams need Git/Mercurial?

A while ago an issue came up in the company I work for: the development team was expanding, because some of the projects had reached later, more mature stages of development – a natural thing.

The expansion of the teams, however, led to an interesting thing – using SVN was becoming a pain in the ass. What do I mean by that?

Before the expansion, every project in the company had one developer working on it. That meant linear development: when someone finished a functionality, bug fix or feature, the project easily became ready to publish. With more than one developer per project, the company could build more than one feature per project at the same time. That, however, broke the linear progression. When one person finished their feature, the second person could be halfway through theirs, so to publish the new feature you either had to wait for everyone to finish their work, or hide the unfinished functionality.

I want to clarify that the way the company uses SVN is to have a folder for each project and commit your changes to that folder. A feature is a series of commits. It's pretty standard, actually, and I think a lot of people use SVN that way.

In this context I began searching for solutions, and I actually found several. The more interesting ones are:

  1. Introducing branches in SVN – when you need to build a big feature, you make a branch, do the feature, then merge with the main branch. After a bit of research on the matter, however, I found that this is not such a good option. Why? When merging, SVN tries to clash the two versions of the file you're merging and combine them into one. So if someone has changed a file a little and you changed it too, you get a merge conflict. Now imagine three or four weeks of one person's work trying to clash with three or four weeks of another's. Disaster. Which leads to the second option.
  2. Working with no intermediate commits – you update frequently and commit only when your feature is done. However, this means no one can see your progress and no one can help you with the feature. Also, if you do something wrong at the end of the feature, you won't be able to "revert" your mistake. And it feels wrong, too.
  3. Using a distributed source control system (Git or Mercurial) instead of Subversion. I knew distributed source control systems were better, but didn't actually know why. So I began researching the subject and found some great articles, like the one by Joel Spolsky and this question on Stack Exchange, and a few others.

The big difference between SVN and Git/Mercurial – and the solution to our problems – is in the way the two kinds of source control work. SVN tracks the VERSIONS of files, and when merging it tries to unite the two versions into one. Distributed source control systems like Git and Mercurial track the CHANGES to your files, and when they merge they try to apply and combine every change you've made to a file with the changes other people made to it. In theory, that means if you move a method and someone else changes the method's contents without moving it, the source control can figure that out and give you the result – the method moved and changed – without a conflict. That makes merging branches possible with very few conflicts. Nice. And that enables a lot of people to work on the same features and change the same files with little effort and less pain.

It seems using a distributed source control system means you can SCALE  your development very efficiently. Very nice.

I think there may be a fourth solution to the problem – implementing a part of the Scrum agile methodology: the sprint. We set a target for a sprint, make changes (commits) throughout it, and at the end of the sprint we have a stable version of the system which we publish. However, that requires a significant change in the company's processes – how we work, communicate, estimate, plan etc. There's a lot more work and a lot more risk associated with this option.

In the end we chose a distributed source control system – Git, and GitHub in particular. It seems to have so much less pain associated with it compared to SVN.

However, there is still the price of introducing it to the whole team, importing the source code from SVN and learning how to use Git PROPERLY. After all, source control is just a tool in a developer's toolbox, and like every tool, if you don't use it right it can give you a lot of headaches. But I think the whole effort will be well worth it in the end.


Visual Studio: Navigation Bar “Methods And Properties” should be a first class citizen

Lately I've been developing a lot and trying to remember the Visual Studio key combinations I use the most, to make my life easier and increase my productivity. I try to observe: when am I reaching for the mouse? Then I try to find the key combinations that would help me use the mouse less often. In the end, every reach for the mouse wastes time and focus.

I found that a lot of times when I use the mouse I try to reach for this menu:

[Screenshot: the dropdowns above the code editor]

And naturally I tried to find the key combination to open and use it. Here comes my little disappointment. All the other key combinations I needed I could find on the internet, because I knew what I was looking for – the name of the window/control/functionality to open. For this component I couldn't find the name, and there are no tooltips on the dropdowns. So I thought: let me look around the keyboard and keyboard-shortcut options in Visual Studio – I should be able to find it there. And then I found this:

[Screenshot: the keyboard options dialog in Visual Studio]

So much for the user-friendliness.

Anyway, after a couple of minutes of googling I found the name of the dropdown with the class names and the names of methods – it's called the Navigation Bar. Naturally.

The thing that made me sad is that the shortcut for it is Ctrl+F2, and the shortcut goes to the classes dropdown, so if I want the methods dropdown I also need to press TAB. So the whole combination I need is Ctrl+F2 (which I find a little hard to reach) plus TAB. And pressing the combination resets the "methods and properties" dropdown, so it jumps to the first element and moves my cursor on the page. It's not a great experience, and not a friendly one at all.

I did this in Visual Studio 2010 some days ago, and the day before yesterday I saw Visual Studio 2012 with all its shiny new things. Now don't get me wrong – I really like Visual Studio, 2010 and even more 2012. I really think the teams that made them are some of the best developers in the world, and I believe Visual Studio is the best development environment in the world at the moment. But I was a little disappointed that the shortcuts and the keyboard-shortcut options window are the same between Visual Studio 2010 and Visual Studio 2012. I really think that part of Visual Studio could be made better.

So, to be constructive, I'm offering several proposals regarding keyboard shortcuts in Visual Studio:

  1. Create a more user-friendly options menu for keyboard shortcuts. Have links or pictures, or at least descriptions with a nice search, for each shortcut. I think that would make more users want to use shortcuts – which would make users happier and more productive.
  2. Make the Navigation Bar's "methods and properties" dropdown a "first-class citizen" of Visual Studio with its own shortcut. As of now I cannot target this dropdown directly, so I have to use Ctrl+F2 plus TAB to get to it.
  3. The more annoying problem – do not reset the dropdown to the first element when I TAB to it. The dropdown should figure out which method I'm in on the page and, instead of resetting my cursor, navigate to that method.

If I think of something else I’ll update the list.

I don't think the ideas I'm proposing here are a major change. I hope the people behind usability on the Visual Studio team read this post and think about and improve on the things I'm proposing.

Or at least that’s what I would expect from a team that’s created such a great product as Visual Studio 2012.

What is Unit testing?

As a .NET developer for a couple of years, I've read a lot of blogs and articles on the web. A big part of them stated how essential it is for an app or project to have unit tests. They were in agreement that the very definition of legacy code is code that is not covered by unit tests.

Imagine my amusement when I look back at my couple of years of software development and cannot find a single project I worked on that has unit tests in it. Not one. I've worked for three companies on several different projects – from business process management software, through websites and CMSs, to enterprise applications. I would go so far as to say that I don't even know a person working at a company that uses unit testing in its projects (and I have friends who are developers). But enough of that. My purpose is not to rant about how unit testing and automated tests are not widely spread.

I want to explore the field and show the basics of unit testing, how it’s done and why it’s important.

So let's start with the definition of a unit test.

A unit test is a small, repeatable piece of code that tests one piece of functionality – in most cases one method. It doesn't test how functionalities interact with each other. It doesn't test the environment, the database or any other environmental dependency (there are other types of tests for that). It tests the business logic of the method – what the method does. In unit tests, all external or environmental dependencies have to be hidden away – stubbed or mocked.
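To make the definition concrete, here is a minimal sketch – NUnit is used purely as an example framework, and the class under test is made up:

using NUnit.Framework;

public class PriceCalculator
{
    // The unit under test: pure business logic, no environment involved.
    public decimal ApplyDiscount(decimal price, decimal percent)
    {
        return price - (price * percent / 100m);
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void ApplyDiscount_TenPercent_ReducesPriceByTenth()
    {
        var calculator = new PriceCalculator();

        var result = calculator.ApplyDiscount(100m, 10m);

        Assert.AreEqual(90m, result);
    }
}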

But what is unit testing as a process?

Unit testing as a process is essentially automating the validation of every piece of business logic in your application through tests.

I'm sure you're beginning to see the value in it. You can stop being scared of refactoring some bad piece of code, small or big. Also, code that is tightly coupled to other code is very hard to unit test, so unit testing pushes your code to be well structured. I'm sure there are many other benefits that I can't think of right now, but they'll come up later.

In this post I've given a basic definition of what a unit test is and the idea behind the process of unit testing. In my next post I'll choose the technologies and frameworks I'm going to write my unit tests with, and discuss why I chose them. After that I'll continue with some practice – creating some simple unit tests for an MVC web app.


NHibernate is a gun

What do I mean by that? Well let me give you some history.

A while back I worked on a big enterprise platform/project where we did not have much experience using NHibernate. I was learning it while writing the code for the framework, and everything looked peachy. I even sometimes found ways to make my code 10+ times faster – like using a session per request, so I wouldn't open a transaction for every query to the database (if you don't know: transactions are very expensive).
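A sketch of that session-per-request idea with NHibernate's CurrentSessionContext (it assumes current_session_context_class is set to "web" in the NHibernate configuration):

using NHibernate;
using NHibernate.Cfg;
using NHibernate.Context;

public class MvcApplication : System.Web.HttpApplication
{
    private static ISessionFactory _sessionFactory;

    protected void Application_Start()
    {
        // Built once – building a session factory is expensive.
        _sessionFactory = new Configuration().Configure().BuildSessionFactory();
    }

    protected void Application_BeginRequest()
    {
        // One session for the whole request, instead of one per query.
        CurrentSessionContext.Bind(_sessionFactory.OpenSession());
    }

    protected void Application_EndRequest()
    {
        var session = CurrentSessionContext.Unbind(_sessionFactory);
        if (session != null)
            session.Dispose();
    }
}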

One of the problems, I now think, was the notion that we were not creating an application but a platform, on top of which we would then build applications. So we worked for a couple of months without a client, and when we did begin a project for a client, we decided to ship it only when it was fully ready. For a long time we had no real input on how the system was working (no information on whether it was slow or fast, or whether it had production problems).

Naturally, I became a fan of Ayende because of all the awesome NHibernate features he posted about on his blog – features buried deep down in the NHibernate code.

Then one day I found Ayende's NHibernate Profiler. If you don't know what it is – it's an application that connects to NHibernate's logs and analyzes them. It shows you the common and not-so-common pitfalls in your application. If you're using NHibernate and not using NHibernate Profiler, you're doing it wrong.

So when I ran NHibernate Profiler, I was like: WOW! I was astonished by the number of problems we had in the application – SELECT N+1, missing paging almost everywhere, etc.
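The classic SELECT N+1 looks innocent in code, which is exactly why it slips through – a sketch with hypothetical Order and Customer entities:

using System;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public class Customer { public virtual string Name { get; set; } }
public class Order { public virtual Customer Customer { get; set; } }

public static class SelectNPlusOneExample
{
    public static void Run(ISession session)
    {
        // One query for the orders, then one more query per order the moment
        // the lazy-loaded Customer is touched: N+1 round-trips in total.
        var orders = session.Query<Order>().ToList();
        foreach (var order in orders)
            Console.WriteLine(order.Customer.Name);

        // The fix: fetch the association up front, in a single round-trip.
        var ordersWithCustomers = session.Query<Order>()
                                         .Fetch(o => o.Customer)
                                         .ToList();
    }
}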

So what was the real problem in this situation? We worked with objects. We didn't know, or didn't give real thought to, what actually happens underneath NHibernate – how the queries are generated and what happens in the DB with those queries. We didn't think about the data access. After all, we had abstracted it away, hadn't we?

I left that company a while back, and now I'm not using NHibernate much (because the infrastructure in the company I work for doesn't use NHibernate – not that I don't like it; I think it's awesome, actually). I had forgotten about this story until I saw the recent announcement of Umbraco's decision to stop the development of the new version of their CMS. At first I didn't quite understand why. At the keynote event where they announced the decision, they said something about their architecture being flawed, that the development team's lack of experience with NHibernate was part of the problem, and that they shouldn't have built the new version "in the dark", away from the community (away from the customers). After that I read the review Ayende did of Umbraco's new version. And guess what? It was like déjà vu. Maybe the business was different, and the applications, logic, teams and decisions were different, but the root cause was the same. They didn't think about the data access.

And now we get to the point of this article. NHibernate is a gun. It gives you power and freedom to do all kinds of things. So if you're seeing it for the first time and/or don't know how to use it, you can easily point it in the wrong direction – like at your head – and shoot yourself with it. But if you know what NHibernate is and what it does, and you use it right, it will keep you safe and sound from the dangers of the jungle.

Custom domain to Windows Azure website

Disclaimer: The information in this post is a little outdated. Check out the more up-to-date information about configuring Windows Azure Websites with a domain.

Some time ago the Windows Azure team posted a new "version" of the service. I was, and still am, very thrilled about the new features they put in there. I like the new interface very much and just love the TFS and Git automatic deployments. It's like the Azure team knew what would make developers happy and more productive, and they put it in there.

Naturally, I began porting one of my sites (applications) to Windows Azure. I followed the procedure described in the very detailed tutorials. If you haven't tried it, you should – it's so natural and easy to use that it puts a big smile on your face.

Then I decided to point my domain at the Windows Azure website. I searched the web for some info, and there was a good explanation on Azure's blog. Now let me say – I'm not very good at managing domains and the like. A little less than a year ago I bought my first domain and ran some tests with it; I never used it much on the whole.

And here was my first disappointment – we CAN point a domain at our hosted sites, but only if they're on a RESERVED instance. It seems shared instances cannot have a custom domain. It's good that the Azure team wants to fix that and will implement CNAME support for shared instances in the future, though. Hooray!

So the next thing was to switch to reserved mode and attach the domain via a CNAME. Following the explanation on Azure's blog, I made a CNAME configuration at my domain provider, pointing to apostol-apostolov.azurewebsites.net – the app I was testing the domains with. Then I set all subdomains to redirect to "www" and made "www" a CNAME pointing to apostol-apostolov.azurewebsites.net. The configuration looked like this:

[Screenshot: the CNAME configuration at the domain provider]
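In zone-file terms the idea was roughly this sketch:

; "www" points at the Azure alias; all other subdomains redirect to "www"
www    CNAME    apostol-apostolov.azurewebsites.net.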

And then I typed www.apostol-apostolov.com and – well, I hit a wall.

[Screenshot: the server error page]

So now what? I searched a little, but every article out there had the same explanation: make a reserved instance, point the domain with a CNAME at the server alias, and that is all. Maybe because I had little experience with domains, I didn't know what I was missing.

Finally, after poking around for a couple of hours, I figured it out.

[Screenshot: the hostnames list in the Azure portal]

You have to add the domain you're pointing at to the list of hostnames of your website. And voilà! You have a fully functional site with a domain in the new Windows Azure.

Now I'm thinking of running a little test for a week or two on "how much will the reserved instance cost me" and "how much will Windows Azure make my life easier on a daily basis".
