by Emanuel Scirlet | Nov 3, 2015 | .NET, C#
Hello, as promised this is the continuation of Part II of the C# threading series. I hope your time is well spent here.
Locking
Exclusive locking ensures that only one thread can access a particular section of code at any given time. The main mechanisms used for exclusive locking are the lock statement and the Mutex class.
For non-exclusive locking, semaphores are used. The difference between lock and Mutex is that while lock is simpler and faster, a Mutex can span locking across applications and processes, for instance to guard a common resource like a file.
[code lang=”csharp”]
class Test
{
    int x = 5;

    void Calc()
    {
        x += 5;
        Console.WriteLine(x);
    }
}
[/code]
This class is not thread-safe and its output is unpredictable when Calc is called from multiple threads. The thread-safe version below makes use of a lock.
[code lang=”csharp”]
class Test
{
    int x = 5;
    readonly object _locker = new object();

    void Calc()
    {
        lock (_locker)
        {
            x += 5;
            Console.WriteLine(x);
        }
    }
}
[/code]
Only one thread can lock the synchronizing object at a time, while any other threads are blocked until the lock is released.
If other threads try to lock the object, they queue up, but with no guaranteed order. Threads waiting to enter a locked section are in the WaitSleepJoin state.
There is a subtlety here: the lock statement compiles down to Monitor.Enter and Monitor.Exit wrapped in a try/finally, so even if an exception is thrown inside the locked section the lock itself is released. The real danger is that the exception can leave your data in a half-updated state for the next thread to observe, and if you call Monitor.Enter/Exit by hand without a finally block, an exception really can leave the lock held forever. There are more subtle details about Monitor, but I won't get into them now.
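To make that concrete, here is roughly what the lock statement from the example above expands to; a simplified sketch using the Monitor.Enter overload that reports whether the lock was taken:
[code lang="csharp"]
class Test
{
    int x = 5;
    readonly object _locker = new object();

    void Calc()
    {
        // Roughly what lock(_locker) { ... } becomes.
        bool lockTaken = false;
        try
        {
            Monitor.Enter(_locker, ref lockTaken);
            x += 5;
            Console.WriteLine(x);
        }
        finally
        {
            // The finally block guarantees the lock is released,
            // even if an exception is thrown inside the protected section.
            if (lockTaken) Monitor.Exit(_locker);
        }
    }
}
[/code]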
Deadlocking is one of the hardest problems in multi-threading, especially when there are many objects that interact with each other. The main problem is that you can’t be sure what locks your caller has taken out.
So, you might lock a private field a within your class X, unaware that your caller (or caller’s caller) has already locked the field b within class Y. Meanwhile, another thread is doing exactly the reverse, creating a deadlock.
Another example of deadlocking arises when calling Dispatcher.Invoke (in WPF) or Control.Invoke (in Windows Forms) while in possession of a lock. If the UI happens to be running another method that’s waiting on the same lock, a deadlock will occur. This can often be fixed simply by calling BeginInvoke instead of Invoke. Alternatively, you can release your lock before calling Invoke, although this won’t work if your caller took out the lock.
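As a rough illustration of that last point, here is a hypothetical WPF handler (txtStatus is an assumed control name, not something from a real project):
[code lang="csharp"]
readonly object _locker = new object();

void UpdateStatus(string message)
{
    lock (_locker)
    {
        // Dangerous: if the UI thread is itself blocked waiting on _locker,
        // Invoke waits for the UI thread and the UI thread waits for us.
        // Dispatcher.Invoke(new Action(() => txtStatus.Text = message));

        // Safer: BeginInvoke queues the update and returns immediately,
        // so we never wait on the UI thread while holding the lock.
        Dispatcher.BeginInvoke(new Action(() => txtStatus.Text = message));
    }
}
[/code]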
So be careful: locking is great but dangerous. As far as performance goes, taking an uncontended lock is fast, on the order of nanoseconds.
Mutex, yey!
They basically do the same thing as locks but across processes. One common use case is to ensure that an application runs on a machine as a single instance.
Performance-wise, a Mutex is slower than lock: acquiring and releasing one takes on the order of microseconds.
[code lang=”csharp”]
class Test
{
    static void Main()
    {
        // Naming a mutex makes it available computer-wide.
        // Make sure you pick a unique name for it.
        using (var mutex = new Mutex(false, "nameOfTheMutex"))
        {
            // Wait a few seconds in case there is another
            // instance in the process of shutting down.
            if (!mutex.WaitOne(TimeSpan.FromSeconds(3), false))
            {
                Console.WriteLine("Another instance is running.");
                return;
            }
            RunMyApp();
        }
    }

    static void RunMyApp()
    {
        Console.WriteLine("Running. Press Enter to exit");
        Console.ReadLine();
    }
}
[/code]
Semaphores, or in other words, traffic lights.
They have a limited capacity defined by the user; once it is full, everyone has to wait outside until someone leaves.
They’re particularly useful when one wants to limit the maximum number of threads that can execute a piece of code at the same time.
Like a Mutex, a semaphore can span multiple processes if you give it a name, which for instance lets you limit the number of instances of an application that can run in parallel.
Below you can find an example for semaphore usage:
[code lang=”csharp”]
class Test
{
    // Define a capacity of three: the arguments are the initial
    // and maximum counts of the semaphore.
    private static Semaphore _semaphore = new Semaphore(3, 3);

    static void Main()
    {
        for (int i = 1; i <= 5; i++)
            new Thread(TestThread).Start(i);
    }

    static void TestThread(object id)
    {
        Console.WriteLine(id + " tries to get in");
        _semaphore.WaitOne();
        Console.WriteLine(id + " is in!");
        Thread.Sleep(1000 * (int)id);
        Console.WriteLine(id + " leaves");
        _semaphore.Release();
    }
}
[/code]
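To make a semaphore span processes, the only real change is the constructor: give it a name (the name below is just a placeholder) and every process that opens a semaphore with that name shares the same count. A minimal sketch, reusing RunMyApp from the Mutex example above:
[code lang="csharp"]
class Test
{
    static void Main()
    {
        // A named semaphore is shared computer-wide, like a named Mutex,
        // so at most three processes can get past WaitOne at the same time.
        using (var semaphore = new Semaphore(3, 3, "nameOfTheSemaphore"))
        {
            if (!semaphore.WaitOne(TimeSpan.FromSeconds(1)))
            {
                Console.WriteLine("Three instances are already running.");
                return;
            }
            try
            {
                RunMyApp();
            }
            finally
            {
                semaphore.Release();
            }
        }
    }

    static void RunMyApp()
    {
        Console.WriteLine("Running. Press Enter to exit");
        Console.ReadLine();
    }
}
[/code]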
That’s all, see you in the next posts on Thread Safety and Concurrent Collections!
by Emanuel Scirlet | Nov 1, 2015 | .NET, C#
This is a continuation of threading in C# covered here. In this article I am going to cover more usages of threading, particularly timers and UI threading.
Multi-threaded timers come as System.Threading.Timer and System.Timers.Timer, and they cover one of the most common use cases: repeatedly executing an action, such as polling for information or checking for a resource change. The latter is a wrapper around the former, and it is the one I'll cover here.
[code lang=”csharp”]
var timer = new System.Timers.Timer();

// the interval is specified in milliseconds
timer.Interval = TimeSpan.FromSeconds(5).TotalMilliseconds;

// assign a method to the Elapsed event
timer.Elapsed += PerformAction;

timer.Start();   // start the timer
timer.Stop();    // stop the timer
timer.Dispose(); // permanently stop the timer

// this is fired every 5 seconds
void PerformAction(object sender, System.Timers.ElapsedEventArgs e)
{
    // Elapsed runs on a thread-pool thread, so WPF controls
    // must be updated through the Dispatcher.
    Action action = () => txtMessage.Text = message;
    Dispatcher.Invoke(action);
}
[/code]
Such timers use the thread pool, so a handful of threads can serve many timers without wasting resources. That means the Elapsed event can fire on a different thread every time, and it fires regardless of whether the previous callback has finished, which is dangerous: if a callback blocks or deadlocks, the firings pile up and can eventually exhaust all available threads and crash your application. I've seen this happen, so these event handlers must be thread-safe.
Those timers are fairly precise, typically within a 10-20 ms range depending on the OS and system load.
There is a catch though: you cannot update controls directly, whether they are Windows Forms or WPF controls; you have to use Control.Invoke and Dispatcher.Invoke respectively.
Single-threaded timers are System.Windows.Forms.Timer and System.Windows.Threading.DispatcherTimer, and their main purpose is to get rid of the thread-safety issue. They fire on the same thread that created them, which is basically your UI thread. That brings other advantages: you don't have to worry about Control.Invoke or Dispatcher.Invoke to update your UI elements, because the timer lives on the same thread as your UI, and a tick won't fire until the previous tick has completed. That said, there is one huge downside: performing heavy work in the tick handler will ultimately make your application unresponsive. This makes them suitable only for small jobs such as counters and the like.
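As a rough sketch of such a timer in WPF (txtClock is an assumed control name for this example):
[code lang="csharp"]
// Created and started on the UI thread, so Tick also fires on the UI thread.
var timer = new System.Windows.Threading.DispatcherTimer();
timer.Interval = TimeSpan.FromSeconds(1);
timer.Tick += (sender, e) =>
{
    // No Dispatcher.Invoke needed here, but keep the handler short
    // or the UI will freeze.
    txtClock.Text = DateTime.Now.ToLongTimeString();
};
timer.Start();
[/code]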
Speaking of UIs, you almost have no choice but to use threading, and the best practice is to have one (or a few) UI threads and delegate any heavy lifting to background threads, which at the end update the UI controls with the relevant data.
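In WPF that pattern can look roughly like this (LoadCustomers and txtResult are assumed names for this sketch, with LoadCustomers returning a string):
[code lang="csharp"]
// The heavy lifting runs on a background thread via the TPL;
// only the final UI update is marshalled back to the UI thread.
Task.Factory.StartNew(() =>
{
    var customers = LoadCustomers(); // slow work, off the UI thread
    Dispatcher.Invoke(new Action(() => txtResult.Text = customers));
});
[/code]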
That’s all for now, see you in the next article on deadlocking, concurrency and concurrent collections. I have the feeling it’s going to be a lengthy one. Stay safe, stay thread safe!
by Emanuel Scirlet | Oct 18, 2015 | .NET, C#
I’ve been asked so many times to address this topic but as you may already know it’s a bit complex to say the least and needs to be covered extensively since you can find use cases roughly everywhere. Threads are usually managed by a thread scheduler that ensures that all active threads are given CPU time and the threads that are waiting or blocked (for instance waiting for a resource to be freed or waiting for user input) do not consume any CPU time.
Typical use cases are those where you want to do multiple things at the same time, such as expensive IO operations (downloading a file, writing to a file, reading from a database, etc.) or image processing, while keeping the rest of your application responsive. UI applications and servers are great examples of heavy thread usage.
Now, as useful as threads may be, they can pose serious problems such as locking issues and memory leaks, especially when used incorrectly. As I said earlier, one of the most common issues is data sharing between threads, or in other words how threads access a common resource without ending up in a deadlock. A deadlock is a situation where two (or more) threads are each waiting for the other(s) to finish their job with a resource, so none of them ever does. So you have to be really careful when you're sharing data between threads, typically using a locking mechanism such as monitors, a mutex or a semaphore to prevent that from happening. As for collections, C# provides you with thread-safe collections: queues, stacks, dictionaries, you name it, all residing in the System.Collections.Concurrent namespace, which is a topic by itself that I'll cover some other time.
Relevant namespaces for working with threads are System.Threading and System.Threading.Tasks. The former is the old way of dealing with threading and has been around since .NET Framework 2.0, while the latter has only been around since .NET 4.0 and somewhat simplifies threading. And then there is the Task Parallel Library (TPL), which is now the standard way.
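For comparison, this is roughly what the "old" System.Threading way of starting work looks like (a minimal sketch):
[code lang="csharp"]
// Create and start a dedicated thread explicitly.
var worker = new Thread(() => Console.WriteLine("Hello from a dedicated thread"));
worker.Start();
worker.Join(); // block until the thread finishes
[/code]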
The thread pool recycles threads and caps the total number of threads that can run at once; once that limit is reached, jobs are queued and executed as threads are freed. The Task-based Asynchronous Pattern (TAP) is a newer, more advanced mechanism that uses threads very efficiently, making applications much faster.
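Queuing work directly onto the thread pool is a one-liner (a minimal sketch):
[code lang="csharp"]
// The work item is picked up by a pool thread as soon as one is free.
ThreadPool.QueueUserWorkItem(state => Console.WriteLine("Running on a pool thread"));
[/code]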
Using the TPL (Task Parallel Library), starting new threads is extremely easy and boils down to a call like the one below:
[code lang=”csharp”]Task.Factory.StartNew(() => { Console.WriteLine("Hello"); });[/code]
Of course I made use of a lambda there, but basically you can use any delegate or method for that matter. You may ask yourself: what if the method signature is not void, what if I want to get a result back? Sure, you can do that:
[code lang=”csharp”]
// The <string> type argument for Task.Factory.StartNew
// is not necessary as it is inferred and you can remove it.
// Start the task executing:
Task<string> task = Task.Factory.StartNew<string>(() => GetMyCustomer());
// We can do other work here and it will execute in parallel:
RunSomethingElse();
// When we need the task’s return value, we query its Result property:
// If it’s still executing, the current thread will now block (wait)
// until the task finishes:
string customer = task.Result;
[/code]
Any unhandled exceptions are automatically rethrown when you query the task's Result property, wrapped in an AggregateException. However, if you fail to query its Result property (and don't call Wait on the task), any unhandled exception will take the process down. You can find an example of how to handle such situations below:
[code lang=”csharp”]
Task<string> task = Task.Factory.StartNew(() => GetMyCustomer());
try
{
    task.Wait();
}
catch (AggregateException aex)
{
    foreach (Exception ex in aex.InnerExceptions)
        Console.WriteLine(ex.Message);
}
[/code]
Alright, so you’ve learned some basics about threading in C#. I will continue this series as follows:
Part II – More usages of threading(Timers, Dispatchers)
Part III – Concurrency, Concurrent Collections
Part IV – Task-based Asynchronous Pattern (TAP)
by Emanuel Scirlet | Sep 28, 2015 | .NET, C#
It’s been a while since I last posted, and it’s not because I abandoned the blog; I had other pressing matters to deal with. But now that I have more free time at hand, I will try to post regularly.
In all this time Microsoft has been busy and has pivoted towards an open-source world, with the .NET Foundation initiative as the main driver. Part of that is .NET Core: Roslyn (the new C# compiler), the new JIT (RyuJIT), the GC and a lot of .NET types became open source, along with other cool stuff which you can check out on the .NET Foundation page.
To give you a broader idea, let me explain it using the picture below.
Roslyn is the new open-source compiler for .NET, fully written in C#, as opposed to the previous compiler which was written in C++. I’m going to write a future post to demonstrate the power it now has.
.NET Framework has now reached version 4.6 and still targets Windows for the time being; on top of it sit WPF and Windows Forms, used for UI applications, as well as ASP.NET.
.NET Core, however, is .NET Framework’s brother, born as open source and cross-platform. It is built to be modular, which means you can decide which components to opt in to or out of. However, it is only a subset of the .NET Framework, and for those of you who may be wondering, Microsoft has no plans to open-source WPF. The current roadmap targets .NET Core being production-ready as soon as Q1 2016, which is great.
Now, with all that open source around, Mono was also doing some open-sourcing of its own, but progress was somewhat slow. Mono has since released two new versions to incorporate all this open-source work into its codebase: Mono now supports C# 6.0 and follows the Roslyn compiler path, and a lot of Mono’s previous implementations of collections, data types, LINQ and threading have been entirely replaced with Microsoft’s code, while things like WebRequests, Encodings and the XML serializer are in the process of being replaced and SqlClient is on the roadmap. So Mono got a massive ramp-up in functionality, code quality and stability, it targets .NET Framework 4.5 by default, and I truly believe that Mono can now serve as a serious platform for development. And it’s only going to get better and better, because Xamarin’s efforts are doubled by Microsoft’s push to open-source more and more of the .NET Framework.
I am very excited about this and I will keep you informed about what is going on and what’s the roadmap there.