Tuesday, May 8, 2012

SQL Server Management Studio crashed!

Hi everybody,

Ever had SQL Server Management Studio crash on you after a long day of coding, and when you start it up again it doesn't offer to recover the files you were working on? Well, that happened to me today, and I was a bit worried when no recovery was offered. So I googled a bit and found this handy solution: http://www.sqlservercentral.com/Forums/Topic500268-149-1.aspx

In short, go to your Documents folder, open the "SQL Server Management Studio" folder and then "Backup Files". There should be a folder called "Solution1", and in it you should find all your lost work.

Now...back to work.

Wednesday, April 25, 2012

Tackling timeouts

Working with databases under load sooner or later leads to timeouts. Sometimes they occur for a good reason, sometimes not. Either way, you have to cope. And the cause is very rarely a badly chosen timeout value.

Good reasons for timeouts: You have a long-running task, such as taking a backup, restoring some important data, or any other kind of task that simply will take time.

Bad reasons: You have a poorly written (or poorly optimized, as it often happens) query, or you are placing too much load on the server(s).

As mentioned, either way you have to cope when it happens. But in the case of the bad reasons, you really should try to remove the cause of the timeout.

I won't dive into the universe of SQL query optimization here.

When a timeout occurs, it means the server is busy, and you basically have to wait and check again later to see if it's free. It's like when a person is busy: if you keep poking them to ask if they're done yet, chances are you'll just delay them. And when one timeout happens, more tend to follow. Many times I've seen timeouts occurring in one part of the server caused by badly written queries in a totally different part.

I sometimes see people suggest increasing the timeout period. But in my opinion, unless you have a ridiculously low timeout value, this is not a solution, simply a way of brushing it under the carpet, or treating the symptoms instead of the disease.

If you're queuing up your queries, or processing them asynchronously, and you increase your timeout, you will have more queries in the pipeline when the timeout finally occurs, and a bigger mess to clean up.

You have to know your domain, and set your timeouts accordingly. A shorter timeout will at some point lead to more timeouts occurring, but they will occur sooner and you will spend less time cleaning up. If your query should never take more than 5 seconds to execute, and your timeout is set to 30 seconds, that's 25 seconds spent in vain. Which may sound picky, but it all adds up...
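
To illustrate, here's a minimal C# sketch of setting a per-command timeout with ADO.NET - the connection string and procedure name are just placeholders:

using System.Data;
using System.Data.SqlClient;

static class TimeoutExample
{
    // Hypothetical example: a query that should never need more than ~5 seconds,
    // so we say so explicitly instead of living with the 30-second default.
    public static void RunWithTightTimeout(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.GetOpenOrders", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.CommandTimeout = 5; // seconds; the ADO.NET default is 30

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // ...do whatever you need with the rows
                }
            }
        }
    }
}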

You should also know your system well enough to be able to prioritize the different queries - if something is not important, leave it for later (on a low priority queue, for example) or even just log it and move on. In fact, if logging it and moving on is sufficient, explore the option of removing the query altogether. It sounds like it may not be that important.

If it is important, and you really need to carry out the query (which is usually the case), then you have two options. In both of them I'd suggest catching SqlException and trying to determine whether it was caused by a timeout (or a deadlock - this approach works for those too). If so, let the thread sleep for a short while (to allow whatever is causing the issue to resolve), and then do one of two things.
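
Here's a rough C# sketch of that detection and back-off. The helper names, the plain SQL string and the two-second sleep are illustrative, not taken from any real project:

using System;
using System.Data.SqlClient;
using System.Threading;

static class SqlRetryHelpers
{
    // -2 is ADO.NET's error number for a command timeout,
    // 1205 is SQL Server's error number for a deadlock victim.
    public static bool IsTimeoutOrDeadlock(SqlException ex)
    {
        return ex.Number == -2 || ex.Number == 1205;
    }

    // Run one command; on a timeout or deadlock, back off briefly and report
    // failure so the caller can decide how to retry.
    public static bool TryExecute(string connectionString, string sql)
    {
        try
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                command.ExecuteNonQuery();
                return true;
            }
        }
        catch (SqlException ex)
        {
            if (!IsTimeoutOrDeadlock(ex))
                throw; // unrelated problem - let it bubble up

            Thread.Sleep(TimeSpan.FromSeconds(2)); // let the server recover a bit
            return false; // caller decides: retry, re-queue, log...
        }
    }
}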

With a thread processing a queue, you could peek to get the current object and try to carry out the operation on it. Depending on the result, you either dequeue the element - or just return, so that the next time the method is invoked, it will peek the same object from the queue and try to perform the same operation again.
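
A sketch of that peek-then-dequeue pattern could look like this - WorkItem and TryExecute are hypothetical stand-ins for your own types and data access code:

using System.Collections.Generic;

class QueueProcessor
{
    private readonly Queue<WorkItem> _pending = new Queue<WorkItem>();

    // Called repeatedly by the worker thread.
    public void ProcessNext()
    {
        if (_pending.Count == 0)
            return;

        WorkItem item = _pending.Peek(); // look, but don't remove yet

        if (TryExecute(item))
            _pending.Dequeue(); // success: now it's safe to remove the item
        // On failure we just return; the next invocation peeks the same item
        // and tries the same operation again.
    }

    private bool TryExecute(WorkItem item)
    {
        // carry out the SQL work for this item, e.g. with a helper like the one
        // sketched above; return false on a timeout or deadlock
        return true; // placeholder
    }
}

class WorkItem { /* whatever your queued operation carries */ }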

If you're processing data directly, like calling a stored procedure or performing a LINQ-to-SQL operation, you can call the method recursively to retry. Yes, it doesn't feel as good as the other approach, but it's easy to implement in a smaller solution without refactoring to the queue-processing approach. (One thing to be aware of here is that if you call the method recursively enough times, you will end up with a StackOverflowException. You can avoid this by checking the current call stack depth using System.Diagnostics.StackTrace.)
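
And a rough sketch of the recursive variant, including the call-stack guard - the depth limit of 50 is an arbitrary example value, and Order/ExecuteSaveOrder are made-up names:

using System;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Threading;

static class RecursiveRetry
{
    public static void SaveOrder(Order order)
    {
        try
        {
            ExecuteSaveOrder(order);
        }
        catch (SqlException ex)
        {
            if (ex.Number != -2 && ex.Number != 1205)
                throw; // not a timeout or a deadlock

            // Guard against StackOverflowException by checking how deep we are.
            if (new StackTrace().FrameCount > 50)
                throw; // give up rather than blow the stack

            Thread.Sleep(TimeSpan.FromSeconds(2));
            SaveOrder(order); // try the same operation again
        }
    }

    private static void ExecuteSaveOrder(Order order)
    {
        // call your stored proc or do your LINQ-to-SQL work here
    }
}

class Order { /* hypothetical entity */ }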

In the end, if you have a lot of problems with timeouts, and/or you really need to be sure that your SQL work is carried out no matter what, you should look at Microsoft Message Queuing (MSMQ), which is Microsoft's suggested solution for this. Leave the critical SQL work to the MSMQ handler and let your application simply be an MSMQ messenger.
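
For the messenger part, a minimal sketch could look like this, assuming a local private transactional queue called sqlwork and a simple string payload (both made up for the example; you need a reference to System.Messaging.dll):

using System.Messaging;

static class SqlWorkSender
{
    private const string QueuePath = @".\Private$\sqlwork";

    public static void Send(string payload)
    {
        // Create the queue on first use; transactional, so sends are atomic
        // and messages are recoverable.
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath, true);

        using (var queue = new MessageQueue(QueuePath))
        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            queue.Send(payload, tx);
            tx.Commit();
        }
    }
}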

Wednesday, April 11, 2012

Mysterious user access errors in an SQL database?

Hi,

Ever seen strange error messages like "The table either does not exist or the current user does not have permissions..." when trying to run queries or stored procedures on a database with a user you're sure has access? You check the user's access rights to the specific objects and everything seems correct? Hmmm? If so, run this simple command on the database in question to see whether you have an orphaned user problem:


sp_change_users_login @Action='Report';

If you get some usernames returned, you have an orphaned user problem. This can happen when you back up a database on one server and restore it on another, like your test server: if the SQL logins have different SIDs, the link between the SQL login and the database user is lost.
So, to fix this, run this command on the database:

sp_change_users_login @Action='update_one', @UserNamePattern='<database_user>', 
   @LoginName='<login_name>';


And that's how you fix that!

Friday, September 16, 2011

Automatic web deployment from TFS build

We have recently started using Team Foundation Server 2010, and having a CI build running for every check-in was one of the things we were really enjoying. Then I saw Scott Hanselman's talk on web deployment - if you're using Xcopy, you're doing it wrong!

Of course, once I saw that, I wanted an automated deployment to the development environment every night or so. To get that going, I had to do some research.

Just so you know, the project I'm talking about consists of three web applications, one DAL and two presentation layers. Of course, all three need different configurations for the different environments (dev, test and production).

Web.config transformations
As it turns out, there is a neat feature in Visual Studio 2010 called "web.config transformations" for exactly this purpose. You can read about it here at Scott's blog.

The most obvious use for web.config transformations is probably database connection strings, but you can replace most settings using this simple syntax:

<setting name="Your_Setting_name" serializeAs="String"
         xdt:Transform="Replace" xdt:Locator="Match(name)">
  <value>Your value here!</value>
</setting>

The transform file maps directly to the build configuration. So if you're building in "Debug", the web.Debug.config file is used for the transformations. In our setup, this is the CI build. For the nightly automated deploy we need a separate build configuration, so we created one called Dev-Snapshot (perhaps Nightly-Dev would have been more apt, but let's use Dev-Snapshot for the remainder of this post).

Then you need a build definition in TFS that uses the new build configuration. For instance, call it Dev-Snapshot like the build configuration, and under Process and "Items to build", select the correct project and your newly defined build configuration. Note that whenever you build using Dev-Snapshot in Visual Studio, the original web.config is used as-is. Transformations are only applied when you publish - the web.config is then automagically transformed, using web.Dev-Snapshot.config, into the final web.config inside the deployment package.

Publish settings
Now we have a working build for the development environment, tailored for deployment to our development server. So how do we do the automatic deployment? Maybe there's a switch in TFS Build for it? No such luck.

The first thing you need to look at is your web project's publish settings. They too relate to the build configuration, so make sure you have selected the right one - again, for instance, Dev-Snapshot.

Most interesting bits: under "Items to deploy", select "Only files needed to run the application" - .cs files, .csproj files and so on are not included in the package, just the "necessary files" (more on that later).

You can choose to include database settings (out of scope for this post) and run any setup or change scripts directly. However, this probably fits best for a single-server solution, where you have one database server and one web server - in a web farm you pretty much need to update all nodes whenever you change the db, though I suspect you might be able to run the db package only for the first node or something similar.

Under Web Deployment Package settings you may choose to package as a zip file. I think this is useful. You need to specify where to create the package, and the name of the web site. This name must match the actual name of the web site you're planning to update on the server.

Once these settings are in place, you may want to test that they actually work. Do this by right-clicking the project and selecting "Publish". What TFS will do later on is build a deployment package - which you can also do yourself by right-clicking the project and selecting "Build Deployment Package". Then go look in the destination you provided in the settings - you should have some files there.

Read more about this in Scott Gu's blog post.

Deploying from TFS
Now that we have working web.config transformations and publish settings, the stage is set for the TFS build server (and MSBuild behind the scenes). In Team Explorer, go to Builds and right-click the build definition you're working with. Then go to the Process section, and under "Items to build" make sure you have the correct project with the correct configuration selected (in this example, the Dev-Snapshot configuration).

Open the "3. Advanced" section and go down to MSBuild arguments. Enter
/p:DeployOnBuild=True /p:IsAutoBuild=True
The first tells MSBuild to create a deployment package as part of the build; the second tells the build that this is an automated build, as opposed to a manually triggered one.

If you try queuing this build now, it should work fine. You can open the build drop location to see the deployment package. But there's one step missing - the deployment itself! /p:DeployOnBuild=True, as mentioned, only tells MSBuild to create the deployment package. It does not run the deployment script. That's not really automatic deployment...

What I found out is that you can use a post-build event to call the script. The post-build event is found by right-clicking the project and selecting Properties, then Build Events. Surprise, surprise: the build events are common to all configurations! Which means you can't simply run the script here unconditionally; you have to guard it with a check on the configuration. Otherwise you could end up with any build landing on your production server. Bad news.

Enter a condition that will be true for your build - $(ConfigurationName) is a build variable that matches the Build configuration - and then create the command needed to execute the script:
if "$(ConfigurationName)" == "Dev-snapshot" "$(TargetDir)_PublishedWebsites\Projectname_Package\Projectname.deploy.cmd" /Y /u:publishuser /p:secret! /M:yourwebserver
Of course, your configuration will vary from this, but the basic idea should be clear. While a little messy, this can be extended to as many build configurations as you like - just keep adding conditions.

If you have some common settings for some builds - let's say your development and test environments share some tasks - you can distinguish them with the IsAutoBuild property we passed in, e.g. if "$(IsAutoBuild)" == "True".

Now, if you queue up this build, you should end up with a published web site. Or a failed build. Make sure you have installed Web Deploy on the remote server, opened the correct port in the firewall, and that the deployment agent service is actually running! Details here.

Missing files?
Well, the deployment is now working - new files appear on your web server when you build. But in my case some files weren't appearing. Also, some empty directories that would be populated at runtime were suddenly missing. Strange.

You may remember that setting for "Only files needed to run the application". It determines by itself what is needed. Turns out, .class, .pdf, .zip and .csv files are "not needed". In addition, empty directories are not included in the deployment. It actually half makes sense, though there should be some way for the developer to specify the details.

However, there is no way to do this in the GUI - but there is in the build process, if you use the build targets inside the project file! Sam Stephens has a blog post on this. You can either add this directly to your project file, or create a separate .targets file for tidiness. A word of caution for the latter: while tidier, it requires you to close and reopen the solution (!) for changes to take effect. With the project file you "only" need to unload and reload the project.

Here's the target configuration for the missing files in my project (the \**\ just means all directories, recursively):


<PropertyGroup>
  <CopyAllFilesToSingleFolderForPackageDependsOn>
    CustomCollectClassFiles;
    $(CopyAllFilesToSingleFolderForPackageDependsOn);
  </CopyAllFilesToSingleFolderForPackageDependsOn>
</PropertyGroup>

<Target Name="CustomCollectClassFiles">
  <ItemGroup>
    <_CustomClassFilesForRootFolder Include=".\**\*.class;.\**\*.zip;.\**\*.csv;.\**\*.pdf">
      <DestinationRelativePath>%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
    </_CustomClassFilesForRootFolder>
    <FilesForPackagingFromProject Include="%(_CustomClassFilesForRootFolder.Identity)">
      <DestinationRelativePath>.\%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
    </FilesForPackagingFromProject>
  </ItemGroup>
</Target>


That covers the missing files, but the empty folders need another target. Or, you can simply add an empty file in the folder for it to be picked up, but that is really a hack.

I found a better way of doing it through AfterAddIisSettingAndFileContentsToSourceManifest - an extension point defined by the web publishing pipeline. It's kind of hard to find a complete list of the targets involved, but feel free to add one in the comments.


<PropertyGroup>
  <AfterAddIisSettingAndFileContentsToSourceManifest>
    MakeEmptyFolders
  </AfterAddIisSettingAndFileContentsToSourceManifest>
</PropertyGroup>

<Target Name="MakeEmptyFolders">
  <Message Text="Adding empty folder to hold snapshots...$(_MSDeployDirPath_FullPath)\SnapshotImages" />
  <MakeDir Directories="$(_MSDeployDirPath_FullPath)\SnapshotImages" />
</Target>



There. Missing files and folders no more.

Minifying JavaScript at build time
Since this is a web application, I will add a final note about the minification task. I used the Microsoft Ajax Minifier, but I'm sure the same applies to other tools as well, just with a different command line. Before I started on the automatic deployment project, we were using the AfterBuild target for the minify operations, which worked out well when a batch script (using robocopy!) was doing the deployment.

But the deployment package is built before the AfterBuild target is entered. So we switched to BeforeBuild to get around that - and it works perfectly. Unless your build itself emits JavaScript files, the same approach should be fine for you.

Here's what happens: first, delete the old files. Then, concatenate the separate files into one big .js file. Finally, minify that file. In addition, you obviously have to reference the minified .js from your master page or whatever other page you need it in. I'd also suggest renaming the .js file for each version to avoid caching problems, but that's another story.



<Target Name="BeforeBuild">
     <!-- we need to minify the .js files before build, because they must be included in the deployment package -->
    <Message Text="Going to delete old JS files..." />
    <Delete Files=".\Controls\Javascript\Concatenated.js;.\Controls\Javascript\Concatenated.min.js" />
    <Message Text="Concatenating JavaScript files..." />
    <ItemGroup>
      <InFiles Include=".\Controls\Javascript\*.js" />     
      <InFiles Include=".\Controls\JsFolder\*.js" />
    </ItemGroup>
    <ReadLinesFromFile File="%(InFiles.Identity)">
      <Output TaskParameter="Lines" ItemName="lines" />
    </ReadLinesFromFile>
    <WriteLinesToFile File=".\Controls\Javascript\Concatenated.js" Lines="@(Lines)" Overwrite="true" />
    <Message Text="Minifying JavaScript files..." />
    <ItemGroup>
      <JS Include=".\Controls\Javascript\Concatenated.js" />
    </ItemGroup>
    
    <!-- Minify javascript files in .\Controls\Javascript -->
    <AjaxMin SourceFiles="@(JS)" SourceExtensionPattern="\.js$" TargetExtension=".min.js" CollapseToLiteral="True" LocalRenaming="CrunchAll" OutputMode="SingleLine" RemoveUnneededCode="True" StripDebugStatements="True" EvalsAreSafe="True" InlineSafeSettings="True" CombineDuplicateLiterals="True" CatchAsLocal="True" />
  </Target>





Monday, August 1, 2011

Wake on LAN... not so easy.

Hi, just a quick tutorial on how to make your PC wake up remotely from sleep mode. One would think this is a one-click operation, but no:

NB: Mind you, this is an ASUS P5Q Deluxe motherboard - settings will surely be different on other motherboards.

1. BIOS: you need to enable Wake on LAN, or as in my case:
Power - APM Configuration - Power on by PCI = Enabled
Advanced - Marvell LAN 2 - LAN Boot Rom = Enabled

2. Windows 7:
Control Panel - Network and Sharing Center - Change adapter settings:
Right-click the "Local Area Connection" in use, then click Configure.
On the Advanced tab: "Energy Star" = Disabled,
"Wake From Shutdown" = On.
On the Power Management tab: all of the checkboxes should be checked.



3. Install a program which sends the magic packet - simple, but it does the job. (A small C# sketch of what such a packet looks like follows after step 4.)

PS: there is also an Android app for this on the Android Market.
NB: when using the phone you have to establish a VPN connection before using the WOL application.

4. Now you can use Remote Desktop or something similar to connect to your PC.
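
If you're curious what those WOL tools actually send, here's a small C# sketch of broadcasting a magic packet yourself: six 0xFF bytes followed by the target MAC address repeated 16 times, sent as a UDP broadcast (the MAC address below is a placeholder, and port 9 is just a common convention):

using System;
using System.Net;
using System.Net.Sockets;

static class WakeOnLan
{
    public static void Wake(string mac) // e.g. "00-11-22-33-44-55"
    {
        // Parse the MAC address into its six bytes.
        byte[] macBytes = Array.ConvertAll(mac.Split('-', ':'), b => Convert.ToByte(b, 16));

        // Magic packet: 6 x 0xFF, then the MAC address repeated 16 times.
        byte[] packet = new byte[6 + 16 * 6];
        for (int i = 0; i < 6; i++)
            packet[i] = 0xFF;
        for (int i = 0; i < 16; i++)
            Buffer.BlockCopy(macBytes, 0, packet, 6 + i * 6, 6);

        using (var client = new UdpClient())
        {
            client.EnableBroadcast = true;
            client.Send(packet, packet.Length, new IPEndPoint(IPAddress.Broadcast, 9));
        }
    }
}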

Friday, June 3, 2011

Entity Framework Add Function Import - Get Column Information returns nothing...

I needed to update a SQL stored procedure in our entity data model. In the Model Browser I clicked "Update Model from Database" and then double-clicked the SP under Function Imports to open the "Edit Function Import" dialog, so I could create the complex return type for the SP. To my surprise, the import wizard told me that "The selected stored procedure returns no columns". I knew for a fact that the SP returned several rows.

After a little Google searching I found a solution in this forum post from "Brian": Microsoft forum

Quote: "

it seems Entity Framework will try to get the columns by executing your Stored Procedure, passing NULL for every argument. You can see this by putting a trace on your SQL Server.

So - first make sure your SP will return something under these circumstances. Note it may have been smarter for Entity Framework to execute the Stored Proc with default values for arguments, as opposed to NULLs. Never mind - nothing we can do about that!

However - before trying to run the Stored Procedure, EF does this

SET FMTONLY ON

This will break your stored procedure in various circumstances, in particular, if it uses a temp table.

So, add to the start of your Stored Procedure:

SET FMTONLY OFF;

This worked for me - hope it works for you too.

brian

"
Another workaround, or maybe a better solution, is to not use temp tables at all - that is, if you don't need to index those tables.

If you use table variables instead, the SET FMTONLY ON option will not break your stored procedure. Performance-wise, apart from the option of indexing temp tables, they seem much the same.

Ref this article:


Wednesday, December 22, 2010

Cheat Sheets from AddedBytes

I don't work with regex every day - and the same applies to many developers, I guess.


So I was very happy when I found the excellent regex cheat sheets from IloveJackDaniels.com. But then one day it turned out that page was gone - so, to save anyone else the trouble: they have moved to addedbytes.com.

And here's a link to their cheat sheets:

You'll find two versions of the Regular Expression cheat sheet, along with CSS, Python, JavaScript and more.