
.NET Core Succinctly®
by Giancarlo Lelli


CHAPTER 1

Introduction


Thanks to the preface, we have seen why .NET Core was needed and how it is positioned in the bigger picture. Without further ado, we are now ready to talk about .NET Core.

The bigger picture


Figure 1: The 2016 .NET Panorama

As you can see in Figure 1, the .NET panorama is rich in app models, each clustered inside a specific technology stack. Setting aside Xamarin (which by itself would require another book), we are left with two main clusters: the full .NET Framework and .NET Core.

They both sit on top of the .NET Standard Library and share a common infrastructure that includes all the technologies that power the compiler, the languages, and the runtime components. ASP.NET is present in both clusters; however, there is a substantial difference between the two: the version, or rather the flavor, of the .NET Framework the runtime uses.

ASP.NET Core runs on the cross-platform version of the .NET Framework, which is the subject of this book, while ASP.NET uses the full .NET Framework. If you'd like to know more about this distinction, Stack Overflow is a good place to start. Let's now analyze the section that refers to the common infrastructure.

.NET Core Runtime (CoreCLR)

The runtime implementation of .NET Core is called CoreCLR. CoreCLR is an umbrella term that groups together all the technologies fundamental to the runtime, such as RyuJIT, the .NET Garbage Collector, native interop, and many other components. The CoreCLR is cross-platform, with multiple OS and CPU ports in progress. Since it's open source, you can find the official repo on GitHub.

The Just-In-Time compiler: RyuJIT

We find ourselves at a point in time where we have (virtually) unlimited computing power and resources. Not long ago, RAM was relatively expensive, and that was acceptable while the x86 architecture was living its golden days. As time passed, however, the price of RAM decreased, allowing the majority of computers (and in recent years, also smartphones and IoT boards) to adopt the x64 architecture. Thanks to its longer addresses, an x64 architecture can index a vastly larger amount of RAM.

Needless to say, this detail had repercussions on the .NET Framework. The .NET x64 JIT was originally designed to produce very efficient code over the long run of a server process, while the .NET x86 JIT was optimized to produce code quickly so that programs start up fast. As the line between client and server machines blurred, the existing JIT implementations needed a rework.

But why do we need a JIT compiler, and why does it need to be efficient? We need a JIT compiler because before we can run any Microsoft intermediate language (MSIL) assembly, the common language runtime must first compile it to native code for the target machine architecture. The more efficient this compilation process is, the faster and more optimized our applications are.
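To make the JIT's role concrete, here is a minimal C# sketch (the `JitDemo` class and `Add` method are illustrative names) that uses `RuntimeHelpers.PrepareMethod`, available on the full .NET Framework and later .NET Core releases, to ask the runtime to compile a method to native code ahead of its first call instead of lazily:

```csharp
using System;
using System.Reflection;
using System.Runtime.CompilerServices;

class JitDemo
{
    static int Add(int a, int b) => a + b;

    static void Main()
    {
        // Ask the runtime to JIT-compile Add to native code now,
        // rather than on its first invocation.
        MethodInfo add = typeof(JitDemo).GetMethod(
            "Add", BindingFlags.NonPublic | BindingFlags.Static);
        RuntimeHelpers.PrepareMethod(add.MethodHandle);

        Console.WriteLine(Add(2, 3)); // Add has already been compiled to native code
    }
}
```

This kind of explicit warm-up is exactly what a server process cares about: paying the compilation cost up front so the first request is not slower than the rest.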

With this in mind, the .NET code generation team worked really hard, and eventually came up with a next-generation x64 compiler, codenamed RyuJIT.

RyuJIT is the next-generation Just-In-Time (JIT) compiler for .NET. It uses a high-performance architecture focused on high-throughput JIT compilation, and it's much faster than JIT64, the 64-bit JIT that had been used for the previous 10 years (introduced with .NET 2.0 in 2005). There was always a big throughput gap between the 32- and 64-bit JITs. RyuJIT is integrated into .NET Core as the 64-bit JIT. The new JIT is twice as fast, meaning apps compiled with RyuJIT start up to 30 percent faster. RyuJIT is based on the same codebase as the x86 JIT, and in the future it will be the basis of all of Microsoft's JITs: x86, ARM, MDIL, and whatever else comes along. Having a single codebase means that .NET programs are more consistent between architectures, and it's much easier to introduce new features.

The new .NET Compiler Platform - Roslyn

Since the beginning of time, we've considered compilers black boxes: a mere piece of the toolchain that transforms code into something that can be executed (and hopefully works as expected). This way of picturing a compiler was fine in the past, but it's no longer suitable for modern development. If you're a .NET developer, you probably know Visual Studio features like Go to Definition, Smart Rename, and so on. These are all powerful refactoring and code-analysis tools that help us on a daily basis to improve the quality of our code. As these tools get smarter, they need access to more and more of the deep code knowledge that only compilers possess.

Thanks to Roslyn (the codename for the new .NET Compiler Platform), we can leverage in an “As-a-Service” fashion a set of APIs that are able to communicate directly with the compiler, allowing tools and end users to share in the wealth of information compilers have about our code. The transition to compilers as platforms dramatically lowers the barrier to entry for creating code-focused tools and applications.

Roslyn’s API layers

Roslyn consists of two main layers of APIs: the Compiler APIs and the Workspaces APIs. The compiler layer contains the object models that correspond to the information exposed at each phase of the compiler pipeline, both syntactic and semantic. It also contains an immutable snapshot of a single invocation of the compiler, including assembly references, compiler options, and source code files. The Workspaces layer contains the Workspace API, which is the starting point for doing code analysis and refactoring over entire solutions. This layer has no dependencies on Visual Studio components. In fact, even Visual Studio Code, the free and cross-platform Visual Studio-like editor, uses Roslyn to provide a rich development experience to C# developers.
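As a small taste of the compiler layer, here is a sketch, assuming the `Microsoft.CodeAnalysis.CSharp` NuGet package is referenced (the `RoslynDemo` class and the source snippet are illustrative), that parses a piece of C# source into an immutable syntax tree and lists the methods it declares:

```csharp
using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class RoslynDemo
{
    static void Main()
    {
        // Parse a source snippet into an immutable syntax tree.
        var tree = CSharpSyntaxTree.ParseText(
            "class Sample { void Hello() { } void World() { } }");

        // Walk the tree and print every method declaration's name.
        foreach (var method in tree.GetRoot()
                                   .DescendantNodes()
                                   .OfType<MethodDeclarationSyntax>())
        {
            Console.WriteLine(method.Identifier.Text);
        }
    }
}
```

This is the same object model that powers Go to Definition and Smart Rename: the tooling queries the tree (and, at the semantic layer, a `Compilation`) instead of re-parsing text on its own.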

Note: If you want to learn more about Visual Studio Code, you can visit the official website. You can also download the free e-book about Visual Studio Code from Syncfusion’s Succinctly series.

.NET Native

.NET Native is a pre-compilation technology for building modern apps in Visual Studio 2015. The .NET Native toolchain will compile your managed IL binaries into native binaries. Every managed (C# or VB) Universal Windows app will utilize this new technology.

For users of your apps, .NET Native offers these advantages:

  • Fast execution times
  • Consistently speedy startup times
  • Low deployment and update costs
  • Optimized app memory usage

.NET Native is able to bring the performance benefits of C++ to managed code developers because it uses the same or similar tools as C++ under the hood.

What is .NET Core?

.NET Core 1.0 is a new modular runtime, enriched with a subset of the APIs found in the .NET Framework. We have a feature-complete product on Windows, while on the other platforms (Linux and OS X) some features are still under development. .NET Core 1.0 can be divided into two major sections: one called CoreFX, which consists of a small set of libraries, and one called CoreCLR, a small and optimized runtime.

.NET Core is one of Microsoft's projects under the stewardship of the .NET Foundation, meaning that it's open source, and we can all contribute to it and follow its progress. Later in the book we'll dive into the requirements we have to meet if we want to contribute to the project.

In pursuit of modularity, Microsoft chose to distribute the CoreCLR runtime and the CoreFX libraries on NuGet, factoring them as individual NuGet packages. The packages are named after their namespace in order to facilitate their discovery during search.
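As an illustration of this factoring, a .NET Core 1.0 project.json might pull in just the namespace-named packages an app actually uses (the version numbers here are indicative, not prescriptive):

```json
{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0"
    },
    "System.Collections": "4.0.11",
    "System.IO": "4.1.0",
    "System.Net.Http": "4.1.0"
  },
  "frameworks": {
    "netcoreapp1.0": {}
  }
}
```

Because each package name mirrors a namespace, searching NuGet for the namespace you are already using in code leads you straight to the library that provides it.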

One of the key benefits of .NET Core is its portability. You can package and deploy the CoreCLR with your application, eliminating your application’s dependency on an installed version of .NET. You can host multiple applications side-by-side using different versions of the CoreCLR, and upgrade them individually, rather than being forced to upgrade all of them simultaneously. CoreFX has been built as a componentized set of libraries, each requiring the minimum set of library dependencies. This approach enables minimal distributions of CoreFX libraries (just the ones you need) within an application, alongside CoreCLR.

Note: .NET Core is not intended to be smaller than the .NET Framework, but thanks to this modular-focused model, applications depend only on the libraries they need, allowing for a smaller memory footprint.


Figure 2: A diagram that summarizes .NET Core

Figure 2 expands on the one we saw before by adding a column listing the tools we as developers have at our disposal to consume the different frameworks. With regard to .NET Core, we can see that besides ASP.NET Core, it includes another workload: the Universal Windows Platform. Previously I mentioned the .NET Standard Library, but I've basically skipped any clarification of what it actually is. It's an important topic that's very hard to summarize and simplify. If you'd like to read more about it, head over to the official documentation on GitHub.

What about the full .NET Framework?

If you’ve read this far, and you’re a bit paranoid like me, you have certainly wondered, “Where will the full .NET Framework end up after all this NuGet-cross-modular extravaganza?” Fear not: the .NET Framework is still the platform of choice for building rich desktop applications, and .NET Core doesn’t change that.

However, now that Visual Studio 2015 is out, .NET Core will be versioned faster than the full framework, meaning that some features will at times be available only on .NET Core-based platforms. The full framework will still be updated constantly, bringing in, when possible, the innovative concepts and features of .NET Core.

Of course, the team’s goal is to minimize API and behavioral differences between the two, while not breaking compatibility with existing .NET Framework applications. There are also investments being made exclusively for the .NET Framework, such as the work the team announced in the WPF Roadmap.

The role of Mono

For those who don’t know Mono, it’s essentially an open source re-implementation of the .NET Framework. As such, it shares the richness of the APIs with the .NET Framework, but it also shares some of its problems, specifically around the implementation factoring. Another way to look at it: The .NET Framework has essentially two forks. One fork is provided by Microsoft and is Windows-only. The other fork is Mono, which you can use on Linux and Mac.

Mono won’t be discussed in detail during the course of this e-book, but whenever possible, I’ll try to point you to useful resources that might help you learn more about it.
