It's common for .NET solutions to have multiple projects, for example an API and a UI. Did you know Microsoft Visual Studio and JetBrains Rider allow you to start as many projects as you want with a single click?
❌ Split Terminals
You can run each project in a separate terminal using dotnet run, but this quickly becomes hard to manage the more projects you need to run.
Figure: Multiple Terminals
❌ Manually Launching in IDE
You could also manually select and launch each project in your IDE, but this will result in a lot of clicking and waiting. It can also be error-prone as you may forget to launch a project.
Figure: Manually selecting and launching each project
✅ Setting Multiple Startup Projects
You can set multiple startup projects in Visual Studio and Rider, allowing you to launch all your projects with a single click.
Note: If you change the launch profile, Visual Studio will not save your configuration and you will have to follow the above steps again.
Note: Rider will save the launch profile you just created, so you can switch between launch profiles without losing your configuration.
Figure: A Dependency Injection based architecture gives us great maintainability
Figure: Good Example - The Solution and Projects are named consistently and the Solution Folders organize the projects so that they follow the Onion Architecture
Dependencies and the application core are clearly separated as per the Onion Architecture.
The References solution folder - to hold any 3rd party assemblies that are not available via NuGet
Common Library projects are named [Company].[AssemblyName].
E.g. BCE.Logging is a shared project between all solutions at company BCE.
Other projects are named [Company].[Solution Name].[AssemblyName].
E.g. BCE.Sparrow.Business is the Business layer assembly for company ‘BCE’, solution ‘Sparrow’.
We have separated the unit tests, one for each project, for several reasons:
It provides a clear separation of concerns and allows each component to be individually tested
The different libraries can be used on other projects with confidence as there are a set of tests around them
All the DLL references and files needed to create a setup.exe should be included in your solution. However, just including them as solution items is not enough; they will look very disordered (especially when you have a lot of solution items). And from the screenshot below, you might be wondering what the _Instructions.docx is used for...
Bad example - An unstructured solution folder
An ideal way is to create "sub-solution folders" for the solution items; the common ones are "References" and "Setup". This will make your solution items look neat and in order. Look at the screenshot below - now it makes sense: we know that the _Instructions.docx contains the instructions of what to do when creating a setup.exe.
Good example - A well structured solution folder has 2 folders - "References" and "Setup"
We have a program called SSW Code Auditor to check for this rule.
When programming in a .NET environment, it is good practice to remove the default imports that aren't used frequently in your code.
This is because IntelliSense lists will be harder to use and navigate with too many imports. For example, in VB.NET, Microsoft.VisualBasic would be a good item to keep in the imports list, because it will be used in most areas of your application.
To remove all the default imports, open the Project Properties page and select Common Properties - Imports.
Figure: Using aliases with the Imports Statement
The Import statement makes it easier to access methods of classes by eliminating the need to explicitly type the fully qualified reference names. Aliases let you assign a friendlier name to just one part of a namespace.
For example, the carriage return-line feed sequence that causes a single piece of text to be displayed on multiple lines is part of the ControlChars class in the Microsoft.VisualBasic namespace. To use this constant in a program without an alias, you would need to type the following code:
MsgBox("Some text" & Microsoft.VisualBasic.ControlChars.crlf _ & "Some more text")
Imports statements must always be the first lines immediately following any Option statements in a module. The following code fragment shows how to import and assign an alias to the Microsoft.VisualBasic.ControlChars class:
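Imports CtrlChrs = Microsoft.VisualBasic.ControlChars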
Future references to this namespace can be considerably shorter:
MsgBox("Some text" & CtrlChrs.crlf & "Some more text")
If an Imports statement does not include an alias name, elements defined within the imported namespace can be used in the module without qualification. If the alias name is specified, it must be used as a qualifier for names contained within that namespace.
The designer should be used for all GUI design. Controls will be dragged and dropped onto the form and all properties should be set in the designer, e.g.
Labels, TextBoxes and other visual elements
ErrorProviders
DataSets (to allow data binding in the designer)
Things that do not belong in the designer:
Connections
Commands
DataAdapters
However, Connection, Command and DataAdapter objects should not be dragged onto forms, as they belong in the business tier. Strongly typed DataSet objects should be in the designer as they are simply passed to the business layer. Avoid writing code for properties that can be set in the designer.
Figure: Bad example - Connection and Command objects in the Designer
Figure: Good example - Only visual elements in the designer
There are many ways to reference images in ASP.NET. There are 2 different situations commonly encountered by developers when working with images:
Scenario #1: Images that are part of the content of a specific page. E.g. A picture used only on 1 page
Scenario #2: Images used on user controls which are shared across different pages in a site. E.g. A shared logo used across the site (commonly in user controls or master pages)
Each of these situations requires a different referencing method.
Option #1: Root-Relative Paths
Often developers reference all images by using a root-relative path (prefixing the path with a slash, which refers to the root of the site), as shown below.
<imgsrc="/Images/spacer.jpg"/>
Bad example - Referencing images with root-relative paths
This has the advantage that <img> tags can easily be copied between pages; however, it should not be used in either situation, because it requires that the website have its own IIS site and be placed at the root (not just as an application), or that the entire site be in a subfolder on the production web server. For example, the following combinations of URLs are possible with this approach:
Staging Server URL | Production Server URL
bee:81/ | www.northwind.com.au
bee/northwind/ | www.northwind.com.au/northwind
As shown above, this approach makes the URLs on the staging server hard to remember, or increases the length of URLs on the production web server.
Option #2: Relative Paths
Images that are part of the content of a page should be referenced using relative paths.
<imgsrc="../Images/spacer.jpg"/>
Good example - Referencing images with relative paths
However, this approach is not possible with images on user controls, because the relative paths will map to the wrong location if the user control is in a different folder to the page.
Option #3: Application-Relative Paths
In order to simplify URLs, ASP.NET introduced a new feature, application relative paths. By placing a tilde (~) in front of a path, a URL can refer to the root of a site, not just the root of the web server. However, this only works on Server Controls (controls with a runat="server" attribute).
To use this feature, you need to use either ASP.NET Server controls or HTML Server controls, as shown below.
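For example (a sketch; the control ID is illustrative):

<asp:Image ID="imgSpacer" runat="server" ImageUrl="~/Images/spacer.jpg" />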
Good example - Application-relative paths with an ASP.NET Server control
Using an HTML Server control creates less overhead than an ASP.NET Server control, but the control does not dynamically adapt its rendering to the user's browser, or provide such a rich set of server-side features.
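The HTML Server control equivalent is a plain <img> tag with a runat="server" attribute (again a sketch):

<img runat="server" src="~/Images/spacer.jpg" />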
Note: A variation on this approach involves calling the Page.ResolveUrl method with inline code to place the correct path in a non-server tag.
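Such a tag might look like this (a sketch):

<img src='<%# Page.ResolveUrl("~/Images/spacer.jpg") %>' />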
Bad example - Page.ResolveUrl method with a non-server tag
This approach is not recommended, because the data binding will create overhead and affect caching of the page. The inline code is also ugly and does not get compiled, making it easy to accidentally introduce syntax errors.
The Microsoft.VisualBasic library is provided to ease the implementation of the VB.NET language itself. For VB.NET, it provides some methods familiar to the VB developers and can be seen as a helper library. It is a core part of the .NET redistribution and maps common VB syntax to framework equivalents, without it some of the code may seem foreign to VB programmers.
Microsoft.VisualBasic | .NET Framework
CInt, CStr | Convert.ToInt32(...), ToString()
vbCrLf | Environment.NewLine, or "\r\n"
MsgBox | MessageBox.Show(...)
This is where you should focus your efforts on eliminating whatever VB6 baggage your programs or developer habits may carry forward into VB.NET. There are better framework options for performing the same functions provided by the compatibility library. You should heed this warning from the VS.NET help file: Caution: It is not recommended that you use the VisualBasic.Compatibility namespace for new development in Visual Basic .NET. This namespace may not be supported in future versions of Visual Basic. Use equivalent functions or objects from other .NET namespaces instead.
Avoid:
InputBox
ControlArray
ADO support in Microsoft.VisualBasic.Compatibility.Data
Environment functions
Font conversions
Incrementally as we do more and more .NET projects, we discover that we are re-doing a lot of things we've done in other projects. How do I get a value from the config file? How do I write it back? How do I handle all my uncaught exceptions globally and what do I do with them?
Corresponding with Microsoft's release of their application blocks, we've also started to build components and share them across projects.
Sharing a binary file with SourceSafe isn't a breeze; here are the steps you need to take. It can be a bit daunting at first.
As the component developer, there are four steps:
In Visual Studio .NET, switch to the Release build
Figure: Switch to release configuration
In your project properties, make sure the Release configuration outputs to the bin\Release folder. While you are here, also make sure XML docs are generated. Use the same name as your DLL but change the extension to .xml (e.g. for SSW.Framework.Configuration.dll, add SSW.Framework.Configuration.xml)
Figure: Project properties
Note: The above assumes C#. Visual Basic, by default, does not have \bin\Release and \bin\Debug folders, which means that the debug and release builds will overwrite each other unless the default settings are changed to match C# (recommended). VB does not support XML comments either; please wait for the next release of Visual Studio (Whidbey).
Figure: Force change to match C#
If this is the first time, include/check-in the release directory into your SourceSafe
Figure: Include the bin\Release directory into SourceSafe
Make sure everything is checked in properly. When you build new versions, switch to Release mode, check out the release DLLs, overwrite them, and when you check them back in they will be the new DLLs shared by other applications.
If the component is part of a set of components in one solution with dependencies between them, you need to check out ALL the bin\Release folders for all projects in that solution and do a build, then check in all of them. This will ensure dependencies between these components don't conflict with projects that reference this component set.
In other words, a set of components such as SSW.Framework.WindowsUI.xxx increments versions AS A WHOLE. A change to one component in the set will cause the whole set to re-establish internal references with each other.
You should use the SharePoint portal in VSTS 2012 because it provides dashboards to monitor your projects as well as quick access to a lot of reports. You can also create and edit work items via the portal.
Figure: SharePoint portal in VSTS 2012
Figure: Keep these two versions consistent
If you are not using the GAC, it is important to keep AssemblyVersion, AssemblyFileVersion and AssemblyInformationalVersion the same, otherwise it can lead to support and maintenance nightmares. By default these version values are defined in the AssemblyInfo file. In the following examples, the first line is the version of the assembly and the second line is the actual version displayed in file properties.
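For example, using an asterisk in all three attributes (a sketch; version numbers are illustrative):

[assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyFileVersion("1.0.*")]
[assembly: AssemblyInformationalVersion("1.0.*")]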
Bad example - AssemblyFileVersion and AssemblyInformationalVersion don't support the asterisk (*) character
If you use an asterisk in the AssemblyVersion, the version will be generated as described in the MSDN documentation.
If you use an asterisk in the AssemblyFileVersion, you will see a warning, and the asterisk will be replaced with zeroes. If you use an asterisk in the AssemblyInformationalVersion, the asterisk will be stored as-is, since this version property is stored as a string.
Figure: Warning when you use an asterisk in the AssemblyFileVersion
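When not using the GAC, a sketch of letting the build generate the version (numbers illustrative):

[assembly: AssemblyVersion("1.0.*")]
// AssemblyFileVersion omitted - it defaults to the AssemblyVersion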
Good example - MSBuild will automatically set the Assembly version on build (when not using the GAC)
Having MSBuild or Visual Studio automatically set the AssemblyVersion on build can be useful if you don't have a build server configured.
If you are using the GAC, you should adopt a single AssemblyVersion and AssemblyInformationalVersion and update the AssemblyFileVersion with each build.
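A sketch of that scheme (numbers illustrative):

[assembly: AssemblyVersion("1.0.0.0")]          // fixed, so GAC references don't break
[assembly: AssemblyInformationalVersion("1.0")] // fixed, the marketed product version
[assembly: AssemblyFileVersion("1.0.3.0")]      // incremented with each build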
Good example - The best way for Assembly versioning (when using the GAC)
If you're working with SharePoint farm solutions (2007, 2010, or 2013), in most circumstances the assemblies in your SharePoint WSPs will be deployed to the GAC. For this reason development is much easier if you don't change your AssemblyVersion, and increment your AssemblyFileVersion instead.
The AssemblyInformationalVersion stores the product name as marketed to consumers. For example for Microsoft Office, this would be "Microsoft Office 2013", while the AssemblyVersion would be 15.0.0.0, and the AssemblyFileVersion is incremented as patches and updates are released.
How do you get a setting from a configuration file? What do you do when you want to get a setting from the registry, or a database? Everyone faces these problems, and most people come up with their own solution. We used to have a few different standards, but since Microsoft released the Configuration Application Blocks, we have found that extending them and using them in all our projects saves us a lot of time! Use a local configuration file for machine and/or user specific settings (such as a connection string), and use a database for any shared values such as tax rates.
See how we configured this reset default settings functionality with the Configuration Block in the .NET Toolkit
In almost every application we have a user settings file to store the state of the application. We want to be able to reset the settings if anything goes wrong.
See how we configured this reset default settings functionality with the Configuration Block in the .NET Toolkit
It is good to store program settings in an .xml file, but developers rarely worry about future schema changes and how they will inform the user that their file uses an old schema.
The version tag identifies what version the file is. This version should be hard-coded into the application. Every time you change the format of the file, you would increment this number.
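For example, a settings file might carry the version like this (element names are illustrative):

<Settings Version="3">
  <LastReportSelected>Sales</LastReportSelected>
</Settings>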
The code below shows how this would be implemented in your project.
Public Function IsXMLFileValid() As Boolean
    Dim fileVersion As String = "not specified"
    Dim dsSettings As New DataSet
    Dim IsMalformed As Boolean = False

    ' Is the file malformed altogether (possibly including the version)?
    Try
        dsSettings.ReadXml(mXMLFileInfo.FullName, XmlReadMode.ReadSchema)
    Catch ex As Exception
        IsMalformed = True
    End Try

    If (Not IsMalformed) Then
        Dim strm As Stream = Asm.GetManifestResourceStream(Asm.GetName().Name _
            + "." + "XMLFileSchema.xsd")
        Dim sReader As New StreamReader(strm)
        Dim dsXMLSchema As New DataSet
        dsXMLSchema.ReadXmlSchema(sReader)

        If dsSettings.Tables(0).Columns.Contains("Version") Then
            fileVersion = dsSettings.Tables(0).Rows(0)("Version").ToString
        End If
        If fileVersion = "" Then
            fileVersion = "not specified"
        End If
        If fileVersion = Global.XMLFileVersion AndAlso _
           Not dsSettings.GetXmlSchema() = dsXMLSchema.GetXmlSchema() Then
            Return False
        End If
    End If

    If IsMalformed OrElse fileVersion <> Global.XMLFileVersion Then
        If mShouldConvertFile Then
            ' Convert the file to the current version
            ConvertToCurrentVersion(IsMalformed)
        Else
            Throw New XMLFileVersionException(fileVersion, Global.XMLFileVersion)
        End If
    End If

    Return True
End Function
Figure: Code to illustrate how to check if the xml file is valid
Note: to allow backward compatibility, you should give the user an option to convert old xml files into the new version structure.
Both controls can represent XML hierarchical data and support Extensible Stylesheet Language (XSL) templates, which can be used to transform an XML file into the correct format and structure. However, TreeView can apply styles more easily and provides special properties that simplify customizing the appearance of elements based on their current state.
Figure: Good example - Use TreeView to represent XML hierarchical data
There are three types of settings files that we may need to use in .NET:
App.Config/Web.Config is the default .NET settings file, including any settings for the Microsoft Application Blocks (e.g. the Exception Management Block and the Configuration Management Block). These are for settings that don't change from within the application. In addition, the System.Configuration classes don't allow writing to this file.
ToolsOptions.Config (an SSW standard) is the file to hold the user's own settings - ones that users can change via Tools - Options.
E.g. ConnectionString, EmailTo, EmailCC
Note: We read and write to this using the Microsoft Configuration Application Block. If we didn't use this Block, we would store it as a plain XML file and read and write to it using the System.Xml classes. The idea is that if something does go wrong when you are writing to this file, at least the App.Config would not be affected. Also, this separates our settings (which are few) from the App.Config (which usually has a lot of stuff that we really don't want a user to stuff around with).
UserSession.Config (an SSW standard). These are for additional setting files that the user cannot change.
e.g. FormLocation, LastReportSelected
Note: This file is overwritable (say, during a re-installation) and it will not affect the user if the file is deleted.
Windows Communication Foundation (WCF) extends .NET Framework to enable building secure, reliable & interoperable Web Services.
WCF demonstrated interoperability with using the Web Services Security (WSS) including UsernameToken over SSL, UsernameToken for X509 Certificate and X509 Mutual Certificate profiles.
WSE is now outdated and has been replaced by WCF, which provides its own set of attributes that can be plugged into any Web Service application.
Security
Implementing security at the message layer offers several policies that can suit any environment, including:
1. Windows Token
2. UserName Token
3. Kerberos Token
4. X.509 Certificate Token
It is recommended to implement UserName Token using the standard login screen that prompts for a username and a password, which then gets passed into the SOAP header (at message level) for authorization. This requires SSL, which provides a secure tunnel from client to server. However, message layer security does not provide authentication security, so it does not stop a determined hacker from trying username/password attempts forever. Custom policies set up at the application level can prevent brute force.
Performance
Indigo (WCF) has the smarts to negotiate the most performant serialization and transport protocol that either side of the WS conversation can accommodate, so it will have the best performance, all things being equal. You can configure the web service's SSL session simply in the web.config file.
After configuring an SSL certificate (in the LocalMachine store of the server), the following lines are required in the web.config:
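A minimal sketch of what this could look like for a WCF service using message credentials over an SSL transport (the binding name is an assumption):

<system.serviceModel>
  <bindings>
    <wsHttpBinding>
      <binding name="SecureBinding">
        <security mode="TransportWithMessageCredential">
          <message clientCredentialType="UserName" />
        </security>
      </binding>
    </wsHttpBinding>
  </bindings>
</system.serviceModel>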
Figure: Setting the SSL to Web Service for Message Layer Security
Did you know that if you are using DataSets throughout your application (not DataReaders), you don't need any code to open or close connections?
Some say it is better to be explicit. However the bottom line is less code is less bugs.
try
{
    cnn.Open();
    adapter.Fill(dataset);
}
catch (SqlException ex)
{
    MessageBox.Show(ex.Message);
}
finally
{
    // In the finally block so it always runs, even if the Fill fails
    cnn.Close();
}
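With DataSets, that explicit plumbing can collapse to a single line - Fill opens the connection if it is closed and closes it again afterwards:

adapter.Fill(dataset);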
Good code: Letting the adapter worry about the connection
Note: A common comment for this rule is... "Please tell users to explicitly open and close connection - even when the .NET Framework can do for them"
The developers who prefer the first (more explicit) code example give the following reasons:
Explicit behaviour is always better for code maintainability - explicit code is more understandable than implicit code. Don't make your other developers have to look up the fact that data adapters automatically maintain the state of your connection for them.
Consistency (or a lack of) - not all Framework classes are documented to behave like this. For example, the IDBCommand.ExecuteNonQuery() will throw an exception if the connection isn't open (it might be an interface method, but interface exceptions are documented as a strong guideline for all implementers to follow). The SqlCommand help doesn't mention anything further about this fact, but considering it's an inherited class, it would be fair to expect it to behave the same way. A number of the other methods don't make mention of connection state, making it difficult to know which basket to put your eggs into...
Developer Awareness - it's healthy for the developer to be aware that they have a resource that needs to be handled properly. If they learn that they don't need to open and close connections here, then when they move on to other resource types where this isn't the case, many errors may be produced. For example, when using file resources, the developer is likely to need to pass an open stream and needs to remember to close any such streams properly before leaving the function.
Efficiency (sort of) - a lot of code will populate more than one object at a time, so if you open the connection once, execute multiple fills or commands, then close it, the intent of the developer is clearer. If we left it to the framework, the connection would likely be opened and closed multiple times; despite connections being really cheap to open from the connection pool, the explicit commands will be slightly (an itty-bitty bit) more efficient and demonstrate the developer's intention more clearly.
Bottom line - It is a controversial one. People who agree with the rule include:
One final note: this argument is a waste of time... With code generators developing most of the data access layer of the application, the errors, if any, will be long gone, and the developer is presented with a higher level of abstraction that allows them to concentrate on more important things rather than mucking around with connections. Particularly considering that, when we start using the Provider model from Whidbey, it won't even be clear whether you're talking to SQL Server or to an XML file.
Each class definition should live in its own file. This ensures it's easy to locate class definitions outside the Visual Studio IDE (e.g. SourceSafe, Windows Explorer)
The only exception is classes that collectively form one atomic unit of reuse - these can live in one file.
Many applications end up working perfectly on the developer's machine. However, once the application is deployed into a setup package and released to the public, it can suddenly give the user the most horrible experience of their life. There are plenty of issues that developers don't take into consideration. Amongst the many issues, 3 stand above the rest if the application isn't tested thoroughly:
The SQL Server Database or the Server machine cannot be accessed by the user, and so developer settings are completely useless to the user.
The user doesn't install the application in the default location (e.g. instead of C:\Program Files\ApplicationName, the user could install it on D:\Temp\ApplicationName)
The developer has assumed that certain application dependencies are installed on the user's machine (e.g. MDAC, IIS, a particular version of MS Access, or SQL Server runtime components like sqldmo.dll)
To prevent issues from arising and having to re-deploy continuously, which would only result in embarrassing yourself and the company, there are certain procedures to follow to make sure you give the user a smooth experience when installing your application.
Have scripts that can get the path of the .exe wherever the user has installed the application
Wise has a Dialog that prompts the user for the installation directory:
Figure: Wise Prompts the user for the installation directory and sets the path to a property in wise called "INSTALLDIR"
An embedded script must be used if the pathname is needed by the application (e.g. .reg files that set registry keys)
The .reg file includes the following hardcoded lines:
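A sketch of such a file (the key path and value name are hypothetical):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\SSW\ProductName]
"InstallDir"="REPLACE_ME"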
Figure: The "REPLACE_ME" string is replaced with the value of the INSTALLDIR property in the .reg file
After setting up the wise file then running the build script, the application must be first tested on the developers' own machine.
Many developers forget to test the application outside the development environment completely and don't bother to install the application using the installation package they have just created.
Doing this will allow them to fix issues such as image pathnames that were set relative to the running process instead of relative to the actual executable, as shown below.
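A sketch of the problem (the image path is illustrative):

// Bad - the relative path is resolved against the current working directory,
// which differs when the app is started from a shortcut
pictureBox1.Image = Image.FromFile(@"Images\logo.gif");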
Bad code - FromFile() method (as well as Process.Start()) give the relative path of the running process. This could mean the path relative to the shortcut or the path relative to the .exe itself, and so an exception will be thrown if the image cannot be found when running from the shortcut
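A sketch of the fix (same illustrative path; requires using System.IO and System.Reflection):

// Good - resolve the image relative to the executable itself
string exeFolder = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
pictureBox1.Image = Image.FromFile(Path.Combine(exeFolder, @"Images\logo.gif"));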
Good code - GetExecutingAssembly().Location will get the pathname of the actual executable and no exception will be thrown
This exception would never have been found if the developer didn't bother to test the actual installation package on his own machine.
Having tested on the developer's machine, the application must be tested on a virtual machine in a pure environment without dependencies installed in GAC, registry or anywhere else in the virtual machine.
Users may have MS Access 2000 installed, and the developer's application may behave differently on an older version of MS Access even though it works perfectly on MS Access 2003. The most appropriate way of handling this is to use programs like VMware or MS Virtual PC.
This will help the developer test the application on all possible environments to ensure that it caters for all users, minimizing assumptions as much as possible.
We like to have debugging information in our application so that we can view line number information in the stack trace. However, we won't release our product in Debug mode; for example, if we use "#if DEBUG" blocks in our code, we don't want them to be compiled into the release version.
If we want line numbers, we simply need debugging information. You can change an option in the project settings so it is generated when using the Release build.
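For example (a sketch):

#if DEBUG
    MessageBox.Show("Debug build - for developers only");
#endif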
Figure: Code that should only run in Debug mode, we certainly don't want this in the release version.
Figure: Set "Generate Debugging Information" to True on the project properties page (VS 2003)Figure: Set "Debug Info" to "pdb-only" on the Advanced Build Settings page (VS 2005)
Hungarian notation was used in VB6. In .NET there are over 35,000 classes, so we can't just refer to them with three-letter short forms. We suggest developers use the full class name, as in the example below. As a result, the code will be much easier to read and follow.
DateTime dt = DateTime.Now;
DataSet ds = new DataSet();
// 'dt' could be confused with the DateTime above
DataTable dt = ds.Tables[0];
Bad code - Without meaningful name
DateTime currentDateTime = DateTime.Now;
DataSet employmentDataSet = new DataSet();
DataTable contactDetailsDataTable = employmentDataSet.Tables[0];
Whenever we rename a file in Visual Studio .NET, the file becomes a new file in SourceSafe. If the file has been checked-out, the status of old file will remain as checked-out in SourceSafe.
The steps to rename a file that is under SourceSafe control:
Save and close the file in Visual Studio .NET, and check in the file if it is checked-out.
Open Visual SourceSafe Explorer and rename the file.
Rename it in Visual Studio .NET, click "Continue with change" to the 2 pop-up messages:
Figure: Warning message of renaming files under source control
Figure: You are seeing this as the new file name already exists in SourceSafe - just click "Continue with change"
Visual Studio .NET should find the file under source control and it will come up with a lock icon
Imagine that you have just had a User Acceptance Test (UAT), and your app has been reported as being "painfully slow" or "so slow as to be unusable". Now, as a coder, where do you start to improve the performance? More importantly, do you know how much your massive changes have improved performance - if at all?
We recommend that you should always use a code profiling tool to measure performance gains whilst optimising your application. Otherwise, you are just flying blind and making subjective, unmeasured decisions. Instead, use a tool such as JetBrains dotTrace profiler. These will guide you as to how to best optimise any code that is lagging behind the pack. You can run this on both ASP.NET and Windows Forms Applications. The optimisation process is as follows:
Profile the application with JetBrains dotTrace using the "Hot Spot" tab to identify the slowest areas of your application
Figure: Identify which parts of your code take the longest (Hot Spots)
Some parts of the application will be out of your control e.g. .NET System Classes. Identify the slowest parts of code that you can actually modify from the Hot Spot listing
Determine the cause of the poor performance and optimise (e.g. improve the WHERE clause or the number of columns returned, reduce the number of loops, or use a StringBuilder instead of string concatenation - see the sketch after this list)
Re-run the profile to confirm that performance has improved
Repeat from Step 1 until the application is optimised
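As an illustration of the StringBuilder fix mentioned above (variable names are illustrative):

// Slow - each += allocates a brand new string
string csv = "";
for (int i = 0; i < 10000; i++)
{
    csv += i + ",";
}

// Fast - StringBuilder appends into a reusable buffer
var builder = new StringBuilder();  // requires using System.Text;
for (int i = 0; i < 10000; i++)
{
    builder.Append(i).Append(',');
}
string result = builder.ToString();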
SSW Code Auditor, NUnit and Microsoft FxCop are tools to keep your code "healthy". That is why they should be easily accessible in every solution so that they can be run with a double click of a mouse button.
Create a New Project by selecting File > New Project and save it to your solution directory as "nunit.NUnit"
From the Project menu select Add Assembly
Select the Assembly (DLL/EXE) for the project that contains unit tests
Select File > Save Project
Open your Solution in Visual Studio
Right click and add existing file
Select the NUnit project file
Right click the newly added file and select "Open With"
Point it to the NUnit executable
Now you can simply double click these project files to run the corresponding applications.
We have a program called SSW Code Auditor that implements this rule.
Resource files provide a structured and centralized approach to storing and retrieving static scripts, eliminating the need for scattered code snippets and enhancing your development workflow.
Figure: The code in the first box, the string in the resource file in the 2nd box. This is the easiest to read, plus you can localize it, e.g. if you need to localize an alert in the JavaScript
Figure: Add a resource file into your project in VS2005
Figure: Read a value from the newly added resource file
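Reading such a resource in code might look like this (resource and key names are illustrative):

// Inside a Page code-behind: ConfirmDelete is a string resource holding the JavaScript snippet
string confirmScript = Properties.Resources.ConfirmDelete;
ClientScript.RegisterClientScriptBlock(GetType(), "confirmDelete", confirmScript, true);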
In v1.0 and v1.1 of the .NET Framework, when serializing DateTime values with the XmlSerializer, the local time zone of the machine would always be appended. When deserializing on the receiving machine, DateTime values would be automatically adjusted based on the time zone offset relative to the sender's time zone.
Figure: Front-end code in .NET v1.1 (front-end time zone: GMT+8)
[WebMethod]
public DataSet GetByDateCreatedAndEmpID(DateTime DateCreated, String EmpID)
{
    EmpTimeDayDataSet ds = new EmpTimeDayDataSet();
    m_EmpTimeDayAdapter.FillByDateCreatedAndEmpID(ds, DateCreated.Date, EmpID);
    return ds;
}
Figure: Web service method (web service server time zone: GMT+10)
When the front end calls this web method with the current local time (14/01/2006 11:00:00 PM GMT+8) for the parameter 'DateCreated', it expects a returned result for the date 14/01/2006, while the service end returns data for 15/01/2006, because 14/01/2006 11:00:00 PM (GMT+8) is adjusted to 15/01/2006 01:00:00 AM at the web service server (GMT+10)
In v1.1/v1.0 you have no way to control this serializing/deserializing behaviour on DateTime. In v2.0, the new DateTimeKind notion gives you a workaround for the above example.
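A sketch of the v2.0 front-end fix (the proxy and variable names are illustrative):

// Mark the value as Unspecified so the XmlSerializer emits no GMT offset
DateTime dateCreated = DateTime.SpecifyKind(DateTime.Now, DateTimeKind.Unspecified);
DataSet ds = service.GetByDateCreatedAndEmpID(dateCreated, empID);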
Figure: Front-end code in .NET v2.0 (front-end time zone: GMT+8)
In this way, the server end will always get a datetime value of 14/01/2006 11:00:00 without a GMT offset and return what the front end expects.
There are 2 types of connection strings. The first contains only address-type information without authorization secrets. These can use all of the simpler methods of storing configuration, as none of this data is secret.
Option 1 - Using Azure Managed Identities (Recommended)
When deploying an Azure hosted application we can use Azure Managed Identities to avoid having to include a password or key inside our connection string. This means we really just need to keep the address or url to the service in our application configuration. Because our application has a Managed Identity, this can be treated in the same way as a user's Azure AD identity and specific roles can be assigned to grant the application access to required services.
This is the preferred method wherever possible, because it eliminates the need for any secrets to be stored. The other advantage is that for many services the level of access control available using Managed Identities is much more granular making it much easier to follow the Principle of Least Privilege.
Option 2 - Connection Strings with passwords or keys
If you have to use some sort of secret or key to log in to the service being referenced, then some thought needs to be given to how those secrets can be secured. Take a look at Do you store your secrets securely to learn how to keep your secrets secure.
Example - Integrating Azure Key Vault into your ASP.NET Core application
In .NET 5 we can use Azure Key Vault to securely store our connection strings away from prying eyes.
Azure Key Vault is great for keeping your secrets secret because you can control access to the vault via Access Policies. The access policies allow you to add users and applications with customized permissions. Make sure you enable the System assigned identity for your App Service; this is required for adding it to Key Vault via Access Policies.
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder
                .UseStartup<Startup>()
                .ConfigureAppConfiguration((context, config) =>
                {
                    // To run the "Production" app locally, modify your launchSettings.json file
                    // -> set ASPNETCORE_ENVIRONMENT value as "Production"
                    if (context.HostingEnvironment.IsProduction())
                    {
                        IConfigurationRoot builtConfig = config.Build();

                        // ATTENTION:
                        //
                        // If running the app from your local dev machine (not in Azure AppService),
                        // -> use the AzureCliCredential provider.
                        // -> This means you have to log in locally via `az login` before running the app on your local machine.
                        //
                        // If running the app from Azure AppService
                        // -> use the DefaultAzureCredential provider
                        TokenCredential cred = context.HostingEnvironment.IsAzureAppService()
                            ? new DefaultAzureCredential(false)
                            : new AzureCliCredential();

                        var keyvaultUri = new Uri($"https://{builtConfig["KeyVaultName"]}.vault.azure.net/");
                        var secretClient = new SecretClient(keyvaultUri, cred);
                        config.AddAzureKeyVault(secretClient, new KeyVaultSecretManager());
                    }
                });
        });
Tip: You can detect if your application is running on your local machine or on an Azure AppService by looking for the WEBSITE_SITE_NAME environment variable. If null or empty, then you are NOT running on an Azure AppService.
Azure Key Vault and App Services can easily trust each other by making use of System assigned Managed Identities. Azure takes care of all the complicated logic behind the scenes for these two services to communicate with each other - reducing the complexity for application developers.
So, make sure that your Azure App Service has the System assigned identity enabled.
Once enabled, you can create a Key Vault Access policy to give your App Service permission to retrieve secrets from the Key Vault.
Figure: Enabling the System assigned identity for your App Service - this is required for adding it to Key Vault via Access Policies
Adding secrets into Key Vault is easy.
Create a new secret by clicking on the Generate/Import button
Provide the name
Provide the secret value
Click Create
Figure: Creating the SqlConnectionString secret in Key Vault
Figure: SqlConnectionString stored in Key Vault
Note: The ApplicationSecrets section is indicated by "ApplicationSecrets--" instead of "ApplicationSecrets:".
As a result of storing secrets in Key Vault, your Azure App Service configuration (app settings) will be nice and clean. You should not see any fields that contain passwords or keys. Only basic configuration values.
Figure: Your WebApp Configuration - No passwords or secrets, just a name of the Key vault that it needs to access
Video: Watch SSW's William Liebenberg explain Connection Strings and Key Vault in more detail (8 min)
History of Connection Strings
In .NET 1.1 we used to store our connection string in a configuration file like this:
<configuration><appSettings><addkey="ConnectionString"value ="integrated security=true; data source=(local);initial catalog=Northwind"/></appSettings></configuration>
...and access this connection string in code like this:
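// .NET 1.1 - an untyped string lookup; a typo in the key is only caught at runtime
string connectionString = System.Configuration.ConfigurationSettings.AppSettings["ConnectionString"];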
Historical example - Old ASP.NET 1.1 way, untyped and prone to error
In .NET 2.0 we used strongly typed settings classes:
Step 1: Set up your settings in your common project. E.g. Northwind.Common
Figure: Settings in Project Properties
Step 2: Open up the generated App.config under your common project. E.g. Northwind.Common/App.config
Step 3: Copy the content into your entry application's app.config. E.g. Northwind.WindowsUI/App.config. The new setting has been updated in app.config automatically in .NET 2.0
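The connection string can then be read from the generated settings class (the setting name here is illustrative):

string connectionString = Northwind.Common.Properties.Settings.Default.NorthwindConnectionString;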
Historical example - Access our connection string by strongly typed generated settings class...this is no longer the best way to do it
Both SQL Server authentication (standard security) and Windows NT authentication (integrated security) are SQL Server authentication methods that are used to access a SQL Server database from Active Server Pages (ASP).
We recommend you use Windows NT authentication by default. Because Windows security services operate by default with the Microsoft Active Directory directory service, it is a derivative best practice to authenticate users against Active Directory. Although you could use other types of identity stores in certain scenarios, for example Active Directory Application Mode (ADAM) or Microsoft SQL Server, these are not recommended in general because they offer less flexibility in how you can perform user authentication.
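For example (server and database names illustrative):

<connectionStrings>
  <add name="ConnectionString" connectionString="Server=(local); Database=Northwind; Integrated Security=SSPI;" />
</connectionStrings>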
Figure: Good example - Use Windows Integrated Authentication connection string by default
<connectionStrings><addname="ConnectionString"connectionString="Server=(local); Database=NorthWind;uid=sa;pwd=sa;"/><!--It can't use the Windows Integrated because they are using Novell --></connectionStrings>
Figure: Good example - A connection string that can't use Windows Integrated Authentication, with a comment explaining why
Most systems will have variables that need to be stored securely; OpenId shared secret keys, connection strings, and API tokens to name a few.
These secrets must not be stored in source control. It is insecure and means they are sitting out in the open, wherever code has been downloaded, for anyone to see.
There are many options for managing secrets in a secure way:
Bad Practices
Store production passwords in source control
Pros:
Minimal change to existing process
Simple and easy to understand
Cons:
Passwords are readable by anyone who has either source code or access to source control
Difficult to manage production and non-production config settings
Developers can read and access the production password
Tightly integrated into Azure so if you are running on another provider or on premises, this may be a concern. Authentication into Key Vault now needs to be secured.
Figure: Good practice - Overall rating 9/10
Avoid using secrets with Azure Managed Identities
The easiest way to manage secrets is not to have them in the first place. Azure Managed Identities allows you to assign an Azure AD identity to your application and then allow it to use its identity to log in to other services. This avoids the need for any secrets to be stored.
Pros:
Best solution for cloud (Azure) solutions
Enterprise grade
Access granted based on Azure AD permissions - no need to 'securely' share passwords with colleagues
Roles can be granted to your application by your CI/CD pipelines at the time your services are deployed
Cons:
Only works where Azure AD RBAC is available. NB. There are still some Azure services that don't yet support this. Most do though.
Figure: Good practice - Overall rating 10/10
Resources
The following resources show some concrete examples on how to apply the principles described:
You may be asking what's a secret for a development environment? A developer secret is any value that would be considered sensitive.
Most systems will have variables that need to be stored securely; OpenId shared secret keys, connection strings, and API tokens to name a few. These secrets must not be stored in source control. It's not secure and means they are sitting out in the open, wherever code has been downloaded, for anyone to see.
There are different ways to store your secrets securely. When you use .NET User Secrets, you can store your secrets in a JSON file on your local machine. This is great for development, but how do you share those secrets securely with other developers in your organization?
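With .NET User Secrets, storing a secret outside your project folder is straightforward (the key and value here are illustrative):

dotnet user-secrets init
dotnet user-secrets set "GitHub:PatToken" "<your-token-here>"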
Video: Do you share secrets securely | Jeoffrey Fischer (7min)
An encryption key or SQL connection string to a developer's local machine/container is a good example of something that will not always be sensitive in a development environment, whereas a GitHub PAT or Azure Storage SAS token would be considered sensitive as it allows access to company-owned resources outside of the local development machine.
❌ Bad practices
❌ Do not store secrets in appsettings.Development.json
The appsettings.Development.json file is meant for storing development settings. It is not meant for storing secrets. This is a bad practice because it means that the secrets are stored in source control, which is not secure.
Figure: Bad practice - Overall rating: 1/10
❌ Do not share secrets via email/Microsoft Teams
Sending secrets over Microsoft Teams is a terrible idea; the messages can end up in logs, and they are also stored in the chat history. Developers can delete the messages once copied out, although this extra admin adds friction to the process and is often forgotten.
Note: Sending secrets via email is even less secure, adds even more admin for trying to remove traces of the secret, and is probably the least secure way of transferring secrets.
Figure: Bad practice - Overall rating: 3/10
✅ Good practices
✅ Remind developers where the secrets are for a project
For development purposes, once you are using .NET User Secrets you will still need to share them with other developers on the project.
Figure: User Secrets are stored outside the development folder
As a way of giving a heads-up to other developers on the project, you can add a step in your _docs\Instructions-Compile.md file (see the rule on making awesome documentation) to inform developers to get a copy of the user secrets. You can also add a placeholder to the appsettings.Development.json file to remind developers to add the secrets, as shown below.
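For example, a placeholder in appsettings.Development.json might look like this (key names illustrative):

{
  "GitHub": {
    "PatToken": "** Ask a team member for a copy of the user secrets **"
  }
}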
Figure: Good practice - Remind developers where the secrets are for this project
✅ Use 1ty.me to share secrets securely
Using a site like 1ty.me allows you to share secrets securely with other developers on the project.
Pros:
Simple to share secrets
Free
Cons:
Requires a developer to have a copy of the secrets.json file already
Developers need to remember to add placeholders for developer specific secrets before sharing
Access Control - Although the link is single use, there's no absolute guarantee that the person opening the link is authorized to do so
Figure: Good practice - Overall rating 8/10
✅ Use Azure Key Vault
Azure Key Vault is a great way to store secrets securely. It is great for production environments, although for development purposes it means you would have to be online at all times.
Pros:
Enterprise grade
Uses industry standard best encryption
Dynamically cycles secrets
Access Control - Access granted based on Azure AD permissions - no need to 'securely' share passwords with colleagues
Cons:
Not able to configure developer specific secrets
No offline access
Tightly integrated into Azure so if you are running on another provider or on premises, this may be a concern
Authentication into Key Vault requires Azure service authentication, which isn't supported in every IDE
✅ Use an Enterprise Secrets Management tool
Enterprise Secrets Management tools are great for storing secrets for various systems across the whole organization. This includes developer secrets.
Pros:
Developers don't need to call other developers to get secrets
Placeholders can be placed in the stored secrets
Access Control - Only developers who are authorized to access the secrets can do so
Cons:
More complex to install and administer
Paid Service
Figure: Good practice - Overall rating 10/10
Tip: You can store the full secrets.json file contents in the enterprise secrets management tool.
Most enterprise secrets management tools can retrieve the secrets via an API; with this, you could also store the UserSecretsId in a field and create a script that writes the secrets into the correct secrets.json file on your development machine.
It is good practice to highlight string variables or constants in the Visual Studio source code editor to make them stand out. Strings can then be found easily, especially in long source code.
Figure: Default string appearance
Figure: Highlighted string appearance
Figure: Tools | Options form of Visual Studio
Windows Command Processor (cmd.exe) cannot run batch files (.bat) from Visual Studio because it does not accept the files as arguments. One way to run batch files in Visual Studio is to use PowerShell instead.
Bad example - Using Windows Command Processor (cmd.exe) for running batch files.
Good example - Using PowerShell for running batch files
Developers understand the importance of the F5 experience. Sadly, lots of projects are missing key details that are needed to make setup easy.
Let's look at the ways to optimize the experience. There are 4 levels of experience that can be delivered to new developers on a project:
Level 1: Step by step documentation
This is the most important milestone to reach because it contains the bare minimum to inform developers about how to run a project.
The rule on awesome documentation teaches us all the documents needed for a project and how to structure them.
The README.md and Instructions-Compile.md are the core documents that are essential for devs to get running on a project.
Bad example - A project without instructions
Good example - A project with instructions
Tip: In addition to pre-requisites, make sure to mention what isn't supported and any other problems that might come up.
E.g. Problems to check for:
Windows 8 not supported
Latest backup of the database
npm version
Tip: Don't forget about the database, your developers need to know how to work with the database
Figure: Don't forget about the database!
Level 2: Less documentation using a PowerShell script
A perfect solution would need no static documentation. Perfect code would be so self-explanatory that it did not need comments. The same rule applies with instructions on how to get the solution compiling. A PowerShell script is the first step towards this nirvana.
Note: You should be able to get latest and compile within 1 minute. Also, a developer machine should not have to be on the domain (to support external consultants)
All manual workstation setup steps should be scripted with PowerShell, as per the below example:
Problem: Azure environment variable run state directory is not configured _CSRUN_STATE_DIRECTORY.
Problem: Azure Storage Service is not running. Launch the development fabric by starting the solution.
WARNING: Abandoning remainder of script due to critical failures.
To try and automatically resolve the problems found, re-run the script with a -Fix flag.
Figure: Good example - A PowerShell script removes human error and identifies problems in the devs environment so they can be fixed
Level 3: Less maintenance using Docker containerization
Figure: Docker Logo
PowerShell scripts are cool, but they can be difficult to maintain and they cannot account for all the differences in each developer's environment. This problem is exacerbated when a developer comes back to a project after a long time away.
Docker can solve this problem and make the experience even better for your developers. Docker containerization helps to standardize development environments. By using Docker containers, developers won't need to worry about the technologies and versions installed on their device. Everything will be set up for them at the click of a button.
Level 4: More standardization using dev containers
Dev containers take the whole idea of docker containerization to another level. By setting up a repo to have the right configuration, the dev team can be certain that every developer is going to get the exact same experience.
Stored procedure names in code should always be prefixed with the owner (usually dbo). This is because if the owner is not specified, SQL Server will look for a procedure with that name for the currently logged on user first, creating a performance hit.
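For example (a minimal sketch; the procedure name is hypothetical and connection is an open SqlConnection):

// Bad - SQL Server checks the current user's schema before falling back to dbo
var bad = new SqlCommand("proc_GetCustomers", connection) { CommandType = CommandType.StoredProcedure };

// Good - owner-qualified, resolved directly with no extra lookup
var good = new SqlCommand("dbo.proc_GetCustomers", connection) { CommandType = CommandType.StoredProcedure };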
We have a program called SSW Code Auditor to check for this rule.
In C#, backslashes in strings are special characters used to produce "escape sequences", for example \r\n creates a line break inside the string. This means that if you want to put a backslash in a string you must escape it out by inserting two backslashes for every one, e.g. to represent C:\Temp\MyFile.txt you would use "C:\\Temp\\MyFile.txt". This makes file paths hard to read, and you can't copy and paste them out of the application.
By inserting an @ character in front of the string, e.g. @"C:\Temp\MyFile.txt" , you can turn off escape sequences, making it behave like VB.NET. File paths should always be stored like this in strings.
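For example:

// Bad - every backslash must be doubled
string path = "C:\\Temp\\MyFile.txt";

// Good - a verbatim string literal turns off escape sequences
string path2 = @"C:\Temp\MyFile.txt";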
We have a program called SSW Code Auditor to check for this rule.
Web services are more and more popular today as distributed systems are widely deployed. However, invoking a web method the normal (synchronous) way can cause a disaster, because transmitting data over the Internet may cause your program to hang for a couple of minutes.
Figure: Bad example - Invoking a web method synchronously (this will hang your UI thread)
The correct way to invoke a web method is to use an asynchronous call to send the request and a delegated callback method to read the response, see the code below:
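A sketch of the classic pattern, using a generated web service proxy (the proxy class and method names here are hypothetical):

// Send the request without blocking the calling thread
MyWebService proxy = new MyWebService();  // hypothetical generated proxy
proxy.BeginGetData(new AsyncCallback(OnGetDataComplete), proxy);

// The callback runs when the response arrives
private void OnGetDataComplete(IAsyncResult result)
{
    MyWebService proxy = (MyWebService)result.AsyncState;
    string data = proxy.EndGetData(result);  // read the response
    // ...update the UI, marshalling back to the UI thread if necessary
}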
Every application has different settings depending on the environment it is running on, e.g. production, testing or development. It is much easier and more efficient if an app.config is provided for each environment type, so the developer can just copy and paste the required app.config.
Figure: Bad Example - Only 1 App.config provided
Figure: Good Example - Several App.config are provided
If your projects are generated by code generators (Code Smith, RAD Software NextGeneration, etc.), you should make sure they can be regenerated easily.
Code generators can be used to generate whole Windows and Web interfaces, as well as data access layers and frameworks for business layers, making them an excellent time saver. However, getting the code generators to generate your projects the first time takes considerable time and configuration.
To make it easier next time, we recommend putting the generation command lines into a file called "_Regenerate.bat". Next time, just run the bat file and everything is regenerated in a blink.
Figure: An example command line for Code Smith for Northwind. This "_Regenerate.bat" file must exist in your projects (as must any other necessary resources).
Figure: Good - Have _Regenerate.bat in the solution
Keeping your projects tidy says good things about the team's maturity. Therefore any files and folders that are prefixed with 'zz' must be deleted from the project.
Figure: Bad example - Zz'ed files should not exist in Source Control
Figure: Good example - No zz'ed files in Source Control
When you decide to use TFS 2012, you have the option to choose from different methodologies (aka. Process Templates).
Choosing the right template to fit into your environment is very important.
Figure: Built-in Process Templates in Visual Studio 2012 with TFS 2012
It is recommended to use the top option, the Scrum one. If you think the built-in template is not going to fulfil your needs, customize it and create your own.
If you want help customising your own Process Template, call a TFS guru at SSW on +61 2 9953 3000.
One of the most annoying aspects of the Visual Basic development environment relates to Microsoft's decision to allow late binding. By turning Option Strict Off by default, many type-casting errors are not caught until runtime. You can make VB work the same as other Microsoft languages (which always do strict type-checking at design time) by modifying the project templates.
So, always set Option Strict On right from the beginning of the development.
Before you do this, you should first back up the entire VBWizards directory. If you make a mistake, then the templates will not load in the VS environment. You need to be able to restore the default templates if your updates cause problems.
To configure each template to default Option Strict to On rather than Off, load each .vbproj template with VB source code into an editor like Notepad and then change the XML that defines the template. For example, to do this for the Windows Application template, load the file: Windows Application\Templates\1033\WindowsApplication.vbproj
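The exact XML depends on the Visual Studio version; in MSBuild-style .vbproj files, for example, the options are plain properties (a sketch):

<PropertyGroup>
  <!-- Default new projects to strict compile-time type-checking -->
  <OptionExplicit>On</OptionExplicit>
  <OptionStrict>On</OptionStrict>
</PropertyGroup>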
Technically, you do not have to add the Option Explicit directive, because this is the default for VB; but I like to do it for consistency. Next, you must save the file and close Notepad. Now, if you load a new Windows Application project in the VS environment and examine Project Properties, you will see that Option Strict has been turned on by default.
Figure: Bad Example - Option Strict is Off
Figure: Good Example - Option Strict is On
In order for this setting to take effect for all project types, you must update each of the corresponding .vbproj templates. After making the changes on your system, you will need to deploy the new templates to each of your developers' machines in order for their new projects to derive from the updated templates.
However, sometimes we don't do this because it is too much work. In some scenarios, such as wrappers around COM code and Outlook object model work, there is going to be a lot of work to fix all the type-checking errors. It is genuinely necessary to use the Object type for parameters or variables when you deal with COM components.
When creating NuGet packages, it is better to create few small packages instead of creating one monolithic package that combines several relatively independent features.
When you decide to package your reusable code and publish it to NuGet, it is sometimes worth splitting it into a few smaller packages. This improves the maintainability and transparency of your package. It also makes the package much easier to consume and contribute to.
Let's assume you have created a set of libraries that add extra functionality to web applications. Some classes work with both ASP.NET MVC and ASP.NET WebForms projects, some are specific to ASP.NET MVC, and some are related to security. Each library may also have external dependencies on other NuGet packages. One way to package your libraries would be to create a single YourCompany.WebExtensions package and publish it to NuGet. Sounds like a great idea, but it has a number of issues. If someone only wants to use some MVC-specific classes from your package, they would still have to add the whole package, which pulls in external dependencies they will never use.
A better approach is to split your libraries into 3 separate packages: YourCompany.WebExtensions.Core, YourCompany.WebExtensions.MVC and YourCompany.WebExtensions.Security. YourCompany.WebExtensions.Core contains only the core libraries that can be used in both ASP.NET WebForms and MVC. YourCompany.WebExtensions.MVC contains only MVC-specific code and has a dependency on the Core package. YourCompany.WebExtensions.Security contains only classes related to security. This gives consumers a choice, as well as better transparency about the features you offer. It also improves maintainability, as one team can work on one package while you work on another, and patches and enhancements can be introduced much more easily.
Figure: Bad Example - One big library with lots of features, where most of them are obsolete with a release of ASP.NET MVC 5
Figure: Good Example - Lots of smaller self-contained packages, each with a single purpose
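In the .nuspec for the MVC package, the dependency on the Core package is then a one-liner (the version number is illustrative):

<dependencies>
  <dependency id="YourCompany.WebExtensions.Core" version="1.0.0" />
</dependencies>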
Before starting a software project and evaluating a new technology, it is important to know what the best practices are. The easiest way to get up and running is by looking at a sample application. Below is a list of sample applications that we’ve curated and given our seal of approval.
SSW Northwind Traders
A reference application built using Clean Architecture, Angular 8, EF Core 7, ASP.NET Core 7, Duende Identity Server 6.
eShopOnWeb
Sample ASP.NET Core 6.0 reference application, powered by Microsoft, demonstrating a layered application architecture with a monolithic deployment model. Download the eBook PDF from the docs folder.
eShopOnContainers
Cross-platform .NET sample microservices and container-based application that runs on Linux, Windows and macOS. Powered by .NET 7, Docker Containers and Azure Kubernetes Service. Supports Visual Studio, VS for Mac and CLI-based environments with Docker CLI, dotnet CLI, VS Code or any other code editor.
ContosoUniversity
This application takes the traditional Contoso University sample applications (of which there have been many) and tries to adapt them to how our "normal" ASP.NET applications are built.
Blazor
Awesome Blazor Browser
A Blazor example app that links to many other useful Blazor examples
Blazor Workshop
A Blazor workshop showing how to build a fast food website
UI - Angular
Tour of Heroes
Default Angular sample app as part of the documentation
ngrx Example App
Example application utilizing @ngrx libraries, showcasing common patterns and best practices
When you obtain a 3rd party .dll (in-house or external), you sometimes get the code too. So should you:
reference the Project (aka including the source) or
reference the assembly?
When you face a bug, there are 2 types of emails you can send:
"Dan, I get this error calling your Registration.dll?" or
"Dan, I get this error calling your Registration.dll and I have investigated it. As per our conversation, I have changed this xxx to this xxx."
The 2nd option is preferable. The simple rule is:
If there are no bugs then reference the assembly, and
If there are bugs in the project (or any project it references [See note below]) then reference the project.
Since most applications have bugs, most of the time you should be using the second option.
If it is a well tested component and it is not changing constantly, then use the first option.
Add the project to the solution (if it is not already in the solution).
Figure: Add existing project
Select the "References" folder of the project you want to add references to, right click and select "Add Reference...".
Figure: Add reference
Select the projects to add as references and click OK.
Figure: Select the projects to add as references
Note: We have run into a situation where we reference a stable project A, and an unstable project B. Project A references project B. Each time project B is built, project A needs to be rebuilt.
Now, if we reference stable project A by dll and unstable project B by project, according to this standard, we might face referencing issues: Project A will look for the version of Project B it was built against, rather than the current build, which will cause Project A to fail.
To overcome this issue, we then reference by project rather than by assembly, even though Project A is a stable project. This will mitigate any referencing errors.
If we lived in a happy world with no bugs, I would recommend this approach of using shared components from SourceSafe. As per the prior rule, you can see we like to reference "most" .dlls by project. However, if you do choose to reference a .dll without the source, the important thing is that when the .dll gets updated by another developer, there is *nothing* for the other developers to do: they get the latest version on their next build. Therefore you need to follow this:
As the component user, there are six steps, but you only need to do them once:
First, we need to get the folder and add it to our project, so in SourceSafe, right click your project and create a subfolder using the Create Project menu (yes, it is a very silly name).
Figure: Create 'folder' in Visual SourceSafe
Name it 'References'
Figure: 'References' folder
Share the dll from the directory, so if I want SSW.Framework.Configuration, I go to $/ssw/SSWFramework/Configuration/bin/Release/
I select both the dll and the dll.xml files, right-click and drag them into my $/ssw/zzRefs/References/ folder that I just created in step 1.
Figure: Select the dlls that I want to use
Figure: Right drag, and select "Share"
Still in SourceSafe, select the References folder and run "Get Latest" to copy the latest version onto your working directory.
Figure: Get Latest from Visual SourceSafe
VSS may ask you if you want to create the folder if it doesn't exist. Yes, we do.
Back in VS.NET, select the project, click the Show All Files button in the Solution Explorer, and include the References folder into the project (or get latest if it's already there).
Figure: Include the files into the current project
IMPORTANT! If the files are checked-out to you when you include them into your project, you MUST un-do checkout immediately.
You should never check in these files, they are for get-latest only.
Figure: Undo Checkout, when VS.NET checked them out for you...
Use "Add Reference" in VS.NET, browse to the "References" subfolder and use the dll there.
IMPORTANT! You need to keep your 'References' folder, and not check the files directly into your bin directory. Otherwise when you 'get latest', you won't be able to get the latest shared component.
All done. In the future, whenever you do a get-latest on the project, any updated dlls will come down and be linked the next time you compile. Also, if anyone checks out your project from SourceSafe, they will have the project linked and ready to go.
When a new developer joins a project, there is often a sea of information that they need to learn right away to be productive. This includes things like who the Product Owner and Scrum Master are, where the backlog is, where staging and production environments are, etc.
Make it easy for the new developer by putting all this information in a central location like the Visual Studio dashboard.
Note: As of October 2021, this feature is missing in GitHub Projects.
Figure: Bad example - Don't stick with the default dashboard, it's almost useless
Figure: Good example - This dashboard contains all the information a new team member would need to get started
A good dashboard shows:
When the daily standups occur and when the next Sprint Review is scheduled
The current Sprint backlog
The current build status
Links to:
Staging environment
Production environment
Any other external service used by the project e.g. Octopus Deploy, Application Insights, RayGun, Elmah, Slack
Your solution should also contain the standard _Instructions.docx file for additional details on getting the project up and running in Visual Studio.
For particularly large and complex projects, you can use an induction tool like SugarLearning to create a course for getting up to speed with the project.
Figure: SugarLearning induction tool
Have you ever seen dialogs raised on the server-side? These dialogs hang the thread they are on, and hang IIS until they are dismissed. This happens when you use Trace.Fail or set AssertUIEnabled="true" in your web.config.
See Scott's blog Preventing Dialogs on the Server-Side in ASP.NET or Trace.Fail considered Harmful
public static void ExceptionFunc(string strException)
{
System.Diagnostics.Trace.Fail(strException);
}
Figure: Never use Trace.Fail
<configuration>
<system.diagnostics>
<assert AssertUIEnabled="true" logfilename="c:\log.txt" />
</system.diagnostics>
</configuration>
Figure: Never set AssertUIEnabled="true" in web.config
<configuration>
<system.diagnostics>
<assert AssertUIEnabled="false" logfilename="c:\log.txt" />
</system.diagnostics>
</configuration>
Figure: Should set AssertUIEnabled="false" in web.config
Developers love the feeling of getting a new project going for the first time. Unfortunately, the process of making things work is often a painful experience. Every developer has a different setup on their PC so it is common for a project to require slightly different steps.
The old proverb is "Works on my machine!"
Luckily, there is a way to make all development environments 100% consistent.
Video: Dev Containers from Microsoft (was Remote Containers) with Piers Sinclair (5 min)
Dev Containers let you define all the tools needed for a project in a programmatic manner, so every developer gets an identical, reproducible environment.
Running locally works great if you have a powerful PC. However, sometimes you need to give an environment to people who don't have a powerful PC, or you might want people to develop on an iPad. In that case, it's time to take advantage of the cloud.
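As a sketch, a minimal .devcontainer/devcontainer.json might look like this (the image and extension shown are illustrative choices, not requirements):

{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/dotnet:8.0",
  "customizations": {
    "vscode": {
      "extensions": [ "ms-dotnettools.csdevkit" ]
    }
  }
}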
Often, developers jump onto a new project only to realize they can't get the SQL Server instance running, or the SQL Server setup doesn't work with their machine.
Even if they are able to install SQL Server, developers have a better option with a smaller footprint on their dev machine. Containers give them the ability to work on multiple projects with different clients. In a word "Isolation" baby!
Using Docker to run SQL Server in a container resolves common problems and provides numerous benefits:
Video: Run SQL Server in Docker! (5 min)
In the video, Jeff walks through how and why to run SQL in a container. However, you should not use the Docker image he chose to use in the video.
For SQL Server with Docker you have a couple of choices:
Microsoft SQL Server - mcr.microsoft.com/mssql/server
Azure SQL Edge - mcr.microsoft.com/azure-sql-edge
Warning: If you have an ARM chip, the Docker image in the video is not for you. Instead use Azure SQL Edge.
Benefits
✅ Isolation: Docker enables you to create separate networks with SQL Server and control access, allowing for multiple instances on a single PC. More importantly if you are a consultant and work on different projects, you need this
✅ Fast to get Ready to Run (without installing): Docker eliminates the need for repetitive and mundane configuration tasks, speeding up your SQL Server setup. This is especially beneficial for a CI/CD pipeline
✅ Testing Flexibility: Docker allows for testing against different versions of SQL Server simply by changing an image tag or SQL Server type in the environment variable
✅ Resetting for Testing: The contents of the image are immutable meaning that it is easy to remove the container, and spin up a new one with the original state. In short, Docker provides the ability to easily reset all changes for fresh testing scenarios
✅ Transparent Configuration: Docker provides clear and explicit configuration steps in the Dockerfile and docker-compose.yml
✅ Cross-Platform: These days developers in a team have different Operating Systems. The Docker engine runs on many operating systems, making it ideal for diverse development environments
Figure: Bad example - Running a SQL Server environment outside a container
Figure: Good example - Using Docker to containerize a SQL Server environment
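As a sketch, a minimal docker-compose.yml for the Microsoft SQL Server image could look like this (the password is a placeholder; keep real values out of source control):

services:
  sql:
    image: mcr.microsoft.com/mssql/server:2022-latest
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "<YourStrong!Passw0rd>"  # placeholder
    ports:
      - "1433:1433"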
Traditional controllers require a lot of boilerplate code to set up and configure. Most of the time your endpoints will be simple and just point to a mediator handler.
Minimal APIs are a simplified approach for building fast HTTP APIs with ASP.NET Core. You can build fully functioning REST endpoints with minimal code and configuration. Skip traditional scaffolding and avoid unnecessary controllers by fluently declaring API routes and actions.
Check out the Microsoft Docs for more information on Minimal APIs.
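The controller-based version needs a class, attributes and an action method just to return a string. A representative sketch (not necessarily the original figure's exact code):

[ApiController]
[Route("/")]
public class HelloWorldController : ControllerBase
{
    [HttpGet]
    public string Get()
    {
        return "Hello World!";
    }
}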
Figure: Bad Example - 9 lines of code for a simple endpoint
app.MapGet("/", () => "Hello World!");
Figure: Good Example - 1 line of code for a simple endpoint
Minimal APIs are great for:
Learning
Quick prototypes
Vertical Slice Architecture
A similar developer experience to NodeJS
Performance
When working on large enterprise scale projects .NET Solutions can often become unwieldy and difficult to maintain. This is particularly true of .csproj files which end up repeating configuration across all projects. How can one file save you hours of maintenance by keeping project configuration DRY?
What is a Directory.Build.props file?
A Directory.Build.props file is an MSBuild file used in .NET projects to define common properties and configurations that apply to multiple projects within a directory tree. This file helps centralize the configuration and reduce redundancy by allowing you to specify settings that will be inherited by all projects under the directory where the file is located.
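For example, a Directory.Build.props at the solution root might centralize properties that every project should share (these particular properties are illustrative):

<Project>
  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  </PropertyGroup>
</Project>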
Logging is a critical component in modern applications, but it can easily introduce performance overhead.
.NET 6 introduced the LoggerMessageAttribute, a feature in the Microsoft.Extensions.Logging namespace that enables source-generated, highly performant logging APIs. This approach eliminates runtime overheads like boxing and temporary allocations, making it faster than traditional logging methods.
Key performance benefits of LoggerMessageAttribute
Source Generation: Automatically generates the implementation of partial methods with compile-time diagnostics.
Improved Performance: Reduces runtime overhead by leveraging compile-time optimizations.
Flexible Usage: Supports static and instance-based methods with configurable log levels and message templates.
How to use LoggerMessageAttribute
Define logging methods as partial and static to trigger the code generator:
public static partial class Log
{
    [LoggerMessage(
        EventId = 0,
        Level = LogLevel.Critical,
        Message = "Could not open socket to `{HostName}`")]
    public static partial void CouldNotOpenSocket(ILogger logger, string hostName);
}
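Calling the generated method is like calling any other static method, e.g.:

// logger is any ILogger instance
Log.CouldNotOpenSocket(logger, "contoso.com");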
Logging methods can also be used in an instance context by accessing an ILogger field or primary constructor parameter:
public partial class InstanceLoggingExample(ILogger logger)
{
    [LoggerMessage(
        EventId = 0,
        Level = LogLevel.Critical,
        Message = "Could not open socket to `{HostName}`")]
    public partial void CouldNotOpenSocket(string hostName);
}
Using LoggerMessageAttribute with JsonConsole formatter can produce structured logs.
In our log messages we can specify custom event names as well as utilize string formatters:
[LoggerMessage(
    EventId = 9,
    Level = LogLevel.Trace,
    EventName = "PropertyValueEvent",
    Message = "In {City} the average property value is {Value:E}")]
public static partial void PropertyValueInAustralia(ILogger logger, string city, double value);
Constraints
When using LoggerMessageAttribute, ensure:
Logging methods must be partial and return void.
Logging method names must not start with an underscore.
Parameter names of logging methods must not start with an underscore.
Logging methods may not be defined in a nested type.
Logging methods cannot be generic.
If a logging method is static, the ILogger instance is required as a parameter.