Forever F[r]ame – a few words about web dev

Using C# explicit interface implementation for hiding… (Mon, 12 Dec 2016)
One thing that surprised me most about C# was that it does not support multiple inheritance (which I knew from C++). How can we deal with that? Of course, we use interfaces, since one class can implement more than one. But we quickly run into a very common problem:

 


class Test : ITestOne, ITestTwo
{
    void Print() // ERROR
    {
        Console.WriteLine($"I belong to the {nameof(ITestOne)} interface!");
    }

    void Print() // ERROR
    {
        Console.WriteLine($"I belong to the {nameof(ITestTwo)} interface!");
    }
}

interface ITestOne
{
    void Print();
}

interface ITestTwo
{
    void Print();
}

 

Why is that confusing? Well, in a perfect world compilers would just read both strings and decide (like humans do) which method belongs to which interface. But it’s not that easy, and we need to help the compiler out. So, what is the solution? Since Test has to implement both interfaces, we need to implement their methods explicitly. The example code is given below:

 


class Test : ITestOne, ITestTwo
{
    void ITestOne.Print() // Ok
    {
        Console.WriteLine($"I belong to the {nameof(ITestOne)} interface!");
    }

    void ITestTwo.Print() // Ok
    {
        Console.WriteLine($"I belong to the {nameof(ITestTwo)} interface!");
    }
}

interface ITestOne
{
    void Print();
}

interface ITestTwo
{
    void Print();
}

 

That code no longer confuses the compiler, since we disambiguated the methods by stating directly which implementation belongs to which interface. That’s the most common usage of explicit implementations. But there’s one more that might interest you in some cases…
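Note that an explicitly implemented member is reachable only through a reference of the interface type – the static type of the reference picks the implementation. A minimal sketch using the Test class above:

```csharp
var test = new Test();

// test.Print();            // would not compile: Print is not a public member of Test

((ITestOne)test).Print();   // prints the ITestOne message
((ITestTwo)test).Print();   // prints the ITestTwo message

ITestOne one = test;
one.Print();                // same as the first call – the reference type decides
```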

 

Hiding interface’s members

As you probably all know, declaring methods/properties in an interface does not allow us to implement them as non-public. That seems logical, since an interface defines a set of features that objects need to expose. But there are some scenarios where having that kind of possibility would be handy. Let’s take a look at the code below:

 


public interface ISoftDeletable
{
    bool IsActive { get; }
    void Delete();
}

public abstract class InternalEntity : IInternalEntity<int>, ISoftDeletable, IAuditable
{
    [Key]
    public int Id { get; set; }

    public DateTime CreatedDate { get; set; }

    public DateTime UpdatedDate { get; set; }

    public bool IsActive { get; private set; }

    protected InternalEntity()
    {
        this.IsActive = true;
    }

    void ISoftDeletable.Delete()
    {
        this.IsActive = false;
    }
}

 

That’s a listing from a project I did back in February for a Polish programming competition. What’s the point? Well, in this case I defined an interface for the soft delete operation, but I implemented its method explicitly. Why?

 

  • I wanted to keep it private (an explicitly implemented method cannot be public) and invoke the Delete method through the repository class, so the flow is more controlled.
  • I wanted to be sure that every class implementing the ISoftDeletable interface also has an implementation of the Delete method (I just don’t trust my memory).

 

Explicit implementation gave me both – (kinda) private and obligatory methods inside the class. Now it’s worth writing a few words about invoking that kind of method. Here’s the code from the repository:

 


public void Delete(TEntity entity)
{
    var softDeletableEntity = entity as ISoftDeletable;

    if(softDeletableEntity == null)
        throw new NotSupportedException("Entity must implement ISoftDeletable interface");

    softDeletableEntity.Delete();
}

 

As long as we don’t cast the object to the ISoftDeletable type, the Delete method won’t be visible. After using the as keyword we can access it like any other class member. Of course, there are other usages of explicit interface implementation, like backward compatibility, but this one seemed the most useful to me.
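To make the hiding concrete, here’s a quick sketch (UserEntity is a hypothetical subclass of my own, not from the project):

```csharp
public class UserEntity : InternalEntity { }

var user = new UserEntity();

// user.Delete();                  // does not compile: Delete is visible only via the interface

ISoftDeletable deletable = user;   // an implicit conversion is enough
deletable.Delete();

Console.WriteLine(user.IsActive);  // False – the entity was soft deleted
```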

 

What are covariance and contravariance in C#? (Mon, 05 Dec 2016)
Sometimes it happens that we use mechanisms/features of a specific language without being aware of it. That’s fine, but if you want to discuss your code with an audience or coworkers in the future, sooner or later you’ll be forced to learn and understand it (or at least name it using technical nomenclature). Therefore, today I’m going to discuss two related „mechanisms” of C# which are covariance and contravariance.

 

Covariance

Covariance is a type conversion from a specific type to a more general (base) one. Here’s a simple example:

 

class Shape { }

class Rectangle : Shape { }

class Square : Rectangle { }

class Test
{
    void TestCovariance()
    {
        Rectangle rectangle1 = new Square(); // Compiles
        Rectangle rectangle2 = new Rectangle(); // Compiles
        Rectangle rectangle3 = new Shape(); // Error
    }

}

 

This looks pretty obvious:

  • Not every shape must be a rectangle
  • Every rectangle is a rectangle
  • Every square is also a rectangle

 

In this case, we can say that the Square class is covariant with the Rectangle class. In C# all values returned from methods are covariant. Therefore, this code compiles:

 


class Test
{
    void TestMethodReturnedValuesCovariance()
    {
        Rectangle rectangle = GetRectangle();
        Shape shape = GetRectangle();
    }

    Rectangle GetRectangle()
    {
        return new Rectangle();
    }

}

 

This one might be really handy in your code, but in some cases covariance may be the cause of a tricky exception. Why tricky? Let’s discuss the following implementation:

 


public void TestArrayVariance()
{            
    string[] stringArray = new string[2];
    object[] objectArray = stringArray;

    objectArray[1] = new Guid();
}


 

So, in this case, we assigned a string array to an object array (due to the fact that string inherits from the object class). As you can see, it’s an almost identical example to the one from the beginning of this article. Because in C# we can assign almost everything to object, we used that to insert a new Guid into our array. Guess what? This code compiles! But wait! Don’t even think that we can create a „multi-type” structure because of covariance. Here’s what happens when we run this:

 

[Screenshot: an ArrayTypeMismatchException thrown at runtime]

 

As presented, an exception was thrown at runtime, so it’s not that easy 😉 But be aware of this kind of mistake when using covariance.
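If you want to see (and handle) the failure yourself, the store into the covariant array throws a System.ArrayTypeMismatchException, which can be caught like any other exception:

```csharp
string[] stringArray = new string[2];
object[] objectArray = stringArray;   // legal: arrays of reference types are covariant

try
{
    objectArray[1] = new Guid();      // compiles (the Guid is boxed), fails at runtime
}
catch (ArrayTypeMismatchException)
{
    Console.WriteLine("The runtime guarded the string[] against a non-string element.");
}
```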

 

Contravariance

As most of you have probably guessed, contravariance is the opposite of the previous mechanism. It’s a type conversion from a general type to a more specific one. In C# all method parameters are contravariant. Therefore, compilation of the following code finishes with the presented results:

 


class Test
{
    void TestMethodParamsContravariance()
    {
        SomeMethod(new Shape()); //Error
        SomeMethod(new Rectangle()); //Compiles
        SomeMethod(new Square()); //Compiles
    }

    public void SomeMethod(Rectangle rectangle)
    {

    }
}

 

Covariance and Contravariance in generics

So far, both covariance and contravariance seem to be kinda helpful in many cases. But we come to another question. Do generics support them? The answer is… YES! Let’s take a look at the example below:

 


interface IVariance<T> {}

class Covariant
{
    public void Test()
    {
        IVariance<Shape> shape = GetRectangle(); // Error
        IVariance<Rectangle> rectangle = GetRectangle(); // Compiles
        IVariance<Square> square = GetRectangle(); // Error
    }

    IVariance<Rectangle> GetRectangle()
    {
        return null;
    }
}

 

In the presented case only one declaration is correct. That’s because by default generic parameters are invariant, which means nothing more than „give me exactly that type, I’m not interested in any other”. However, we can change that easily. Let’s start with covariance:

 


interface IVariance<out T> {}

class Covariant
{
    public void Test()
    {
        IVariance<Shape> shape = GetRectangle(); // Compiles
        IVariance<Rectangle> rectangle = GetRectangle(); // Compiles
        IVariance<Square> square = GetRectangle(); // Error
    }

    IVariance<Rectangle> GetRectangle()
    {
        return null;
    }
}


 

All we did was add the out keyword before T. That tells the C# compiler that an IVariance<T> may also be assigned to a variable whose type parameter is more general than T. It’s also worth noting that in this example the out keyword is not related to its original meaning (I mean out parameters). But for me it’s a good naming choice, since it’s quite easy to associate it with the covariance mechanism (out – the specific type and everything above it in the hierarchy). Let’s move forward to the contravariance usage in generics:

 


interface IVariance<in T> {}

class Covariant
{
    public void Test()
    {
        IVariance<Shape> shape = GetRectangle(); // Error
        IVariance<Rectangle> rectangle = GetRectangle(); // Compiles
        IVariance<Square> square = GetRectangle(); // Compiles
    }

    IVariance<Rectangle> GetRectangle()
    {
        return null;
    }
}


 

Instead of the out keyword, we used in. Once again we can associate that pretty easily (in – the specific type and everything below it in the hierarchy).
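These annotations are exactly what makes some everyday BCL types variant – IEnumerable<out T> is declared covariant and Action<in T> contravariant – so, using the Shape/Rectangle classes from above, both assignments compile:

```csharp
// Covariance (out): a producer of Rectangles can stand in for a producer of Shapes.
IEnumerable<Rectangle> rectangles = new List<Rectangle> { new Rectangle() };
IEnumerable<Shape> shapes = rectangles;

// Contravariance (in): a consumer of any Shape can stand in for a consumer of Rectangles.
Action<Shape> handleShape = s => Console.WriteLine(s.GetType().Name);
Action<Rectangle> handleRectangle = handleShape;
handleRectangle(new Rectangle());   // prints "Rectangle"
```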

 

Summary

The last question is: do we really need it? YES! Using the mentioned mechanisms saves a lot of time and code. If you want to take a look at a more „practical” example, go read my CQRS/ES series, where I combined contravariance and reflection to implement event sourcing inside the aggregate root. Here’s the link :) I know that for most of you this could be nothing special, but I know that many developers use it unconsciously. So even if you knew it, now you can also name it – like a pro! As always, I encourage you to follow me on Twitter and Facebook so you’ll be able to read all upcoming posts!

Using Interceptors with aurelia-fetch-client (Sun, 27 Nov 2016)
In today’s post, we are going to explore another feature of the Aurelia framework, or more precisely of aurelia-fetch-client. In many cases it would be nice to perform some specific action when doing an AJAX request. For instance, before sending a request we might want to log it into a browser database like PouchDB. A more common example is calling a toastr on some error like 404, 500, or 401. The problem with such scenarios is that many developers duplicate their code in every single request/callback, so they break the DRY rule (which stands for don’t repeat yourself). Fortunately, aurelia-fetch-client provides an easy solution – interceptors.

 

Exploring Interceptors

In order to register our own interceptor, we need to call the configure function of the HttpClient instance. Inside it we have access to the withInterceptor method of the HttpClientConfiguration object. If you’re confused, here’s the implementation:

 


httpClient.configure(config =>
{
    config.withInterceptor(); // params: interceptor: Interceptor
});

 

Now, as you can see, the function accepts an object which implements the Interceptor interface. Let’s press F12 and navigate to the *.d.ts file:

 


export declare interface Interceptor
{
    request?: (request: Request) => Request | Response | Promise<Request | Response>;

    requestError?: (error: any) => Request | Response | Promise<Request | Response>;

    response?: (response: Response, request?: Request) => Response | Promise<Response>;

    responseError?: (error: any, request?: Request) => Response | Promise<Response>;
}

 

So basically we have four functions here. It’s worthwhile to mention that all of them are optional in the derived types (the ? operator usage). Let’s discuss each of the functions (I’ll paste the original doc comments, which were removed from the listing above for brevity):

 

  • request – Called with the request before it is sent. Request interceptors can modify and return the request, or return a new one to be sent. If desired, the interceptor may return a Response in order to short-circuit the HTTP request itself.
  • requestError – Handles errors generated by previous request interceptors. This function acts as a Promise rejection handler. It may rethrow the error to propagate the failure, or return a new Request or Response to recover.
  • response – Called with the response after it is received. Response interceptors can modify and return the Response, or create a new one to be returned to the caller.
  • responseError – Handles fetch errors and errors generated by previous interceptors. This function acts as a Promise rejection handler. It may rethrow the error to propagate the failure, or return a new Response to recover.

 

As presented, we’ve got a set of functions which should handle most of our specific scenarios related to AJAX calls. Let’s implement some simple example so we can check whether it’s working.

 

The implementation

So, in our simple example, we’re going to implement only the request and responseError functions. Our first step is to create a class which implements the discussed interface. The implementation looks as follows:

 


export class SimpleInterceptor implements Interceptor
{
    request(request: Request)
    {
        console.log(`I am inside of the interceptor doing a new request to ${request.url}`);
        return request;
    }

    responseError(response: Response)
    {
        console.log('Some error has occurred! Run!')
        return response;
    }
}

 

The code is bloody simple, therefore I won’t comment on it. Now we need to pass a new instance of our SimpleInterceptor class to the withInterceptor function:

 


httpClient.configure(config =>
{
    config.withInterceptor(new SimpleInterceptor());
});

 

Now, I’m going to make a GET request to my blog (http://foreverframe.pl). Here’s the result in the console:

 

[Screenshot: console output showing the interceptor’s log messages]

 

Everything works as expected :)
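As a slightly more practical sketch – AuthInterceptor and its getToken callback below are my own hypothetical names, not part of aurelia-fetch-client – an interceptor can attach an Authorization header to every outgoing request. No import is needed for the class itself, because the Interceptor interface is matched structurally:

```typescript
class AuthInterceptor {
    private getToken: () => string | null;

    constructor(getToken: () => string | null) {
        this.getToken = getToken;
    }

    request(request: Request): Request {
        const token = this.getToken();
        if (token) {
            // Attach the header before the request is sent.
            request.headers.set('Authorization', `Bearer ${token}`);
        }
        return request;
    }

    responseError(error: any): never {
        // Rethrow so the caller's rejection handler still sees the failure.
        throw error;
    }
}
```

It would be registered the same way as above, e.g. config.withInterceptor(new AuthInterceptor(() => localStorage.getItem('token'))).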

 

Summary

The above example is trivial, but I don’t see a point in doing something more complicated, since in most cases the problems are „project specific”. All you should remember is that Aurelia’s fetch client provides this solution, which might be handy in the future 😉 That’s all for today. As always, I encourage you to follow me on Twitter and Facebook so you can be up to date with new posts :)

Mac OS X for .NET developer? (Mon, 21 Nov 2016)
I’ve been using Windows since I can remember. I guess the first version I used was ’95. Why Windows? Well, mostly because of simplicity. When I wanted to install some application, all I needed to do was download it and run the .exe. No struggling with weird „sudo”-like commands that I didn’t understand. It just worked. Things changed when I went to university and discovered Debian, which is a Linux distribution. My first moments with the UNIX family weren’t very enjoyable, but after a few days I understood why there are so many „penguin freaks”. I loved it. But at that time I started my professional career as an ASP.NET developer, so I had to give it up, since .NET ran only on Windows. Almost three years passed before I could think about an OS switch again. That was because Microsoft presented a new version of .NET called Core. I thought that was awesome! Nothing kept me on Windows! Well, sort of… there was still Visual Studio. I had heard about the project called OmniSharp, but I wasn’t sure if it would work the way I expected. A couple of days ago I had an opportunity to buy a MacBook Pro for a very good price (thanks, dad :) ), so I had to make a decision: switch to UNIX, which might be a risk, or stay with good old Windows for the next few years. I guess you know what I did…

 

 

Setting up frontend stuff

I started the whole setup with the frontend stuff (I’m a full stack developer). The first thing I had to choose was a code editor. There are many of them:

  • Atom
  • Brackets
  • Sublime Text
  • Vim
  • Visual Studio Code

 

Okay, I lied to you. There was no choice here, I got Visual Studio Code because I love that editor :) It’s really intuitive and works like a charm. I used to work with Atom, but it crashed almost every day, so I just couldn’t stand it anymore. Oh, and I had an episode with Vim. No comment here, I couldn’t even exit to the terminal. I’m just too dumb for that. Having VSC, I needed to get a couple of extensions to make my work easier. Here are the ones I use:

 

[Screenshot: installed Visual Studio Code extensions]

 

There are two extensions for the Aurelia framework (if you’ve never heard of it, go read another of my articles :) ), the mentioned OmniSharp, syntax highlighting for Docker files and an extension for custom icons. If you’re looking for more awesome extensions, I recommend reading this article posted by Scott Hanselman. After setting up VSC, I needed to take care of the console! On Windows I used ConEmu (which offers a lot of functionality such as tabs and a search box), but Mac OS X has its great stock terminal with UNIX commands, so I could move forward. I had to install the basic stuff needed for web development. Here’s the list:

  • npm (requires node.js)
  • gulp
  • bower
  • yeoman
  • aurelia-cli
  • typescript

 

Backend stuff

So far so good, I thought. I finally needed to face .NET Core on UNIX. My first move was just installing it. I was afraid of doing it incorrectly, but surprisingly the steps on the official .NET Core site were very clear and easy to follow. It took me about 5 minutes to install it without any trouble. Nice! And finally, I had to face the choice I had been thinking about for many weeks. What „tool” should I use for coding in .NET Core? I decided to compare three options:

 

 

I’m not going to judge which one is the best, since I’ve been using them for a very short time (but I’ll do my best to blog about my experience and feelings with each of them by the end of the year). Even so, I’ll share some observations with you (as of this particular moment):

 


 

Visual Studio Code + OmniSharp is the fastest one, and with custom icons it’s easy to navigate inside the project. What really frustrates me is the incomplete syntax highlighting. Come on guys, no highlight for List<T> or var inside a foreach loop? And it’s still a code editor, which means no templates for creating a new project from scratch.

 


 

Visual Studio for Mac feels completely different compared to the Windows version, but for now it’s a pretty good and stable IDE. I miss the dark theme and highlighting colors from VS, but it’s not that bad. Unfortunately, I noticed two problems with this IDE. Firstly, I could not open a generated project (yo aspnet) by clicking on project.json, since it treated it as a file, not a solution. Also, I noticed only two templates for .NET Core projects, which are a console application and an empty web application.

 


 

Project Rider is a great IDE. I mean, it just seems like a perfect combo of the two above options, plus it comes with ReSharper. The UI is intuitive, the dark theme and syntax highlighting look exactly like in Visual Studio, R# improves my daily productivity, and it’s pretty fast. It’s worthwhile mentioning that I didn’t have any trouble opening a generated ASP.NET Core project (by clicking on the project.json file) and it offers a lot of .NET Core templates. The only disadvantage I can come up with is crashing, which in my case happens about 5-6 times a day.

 

Other stuff

In this paragraph I just want to go through other stuff that’s going to help me in my daily work. First of all, I needed a web debugger. My personal favorite is Fiddler by Telerik. Fortunately, it’s possible to run it on the Mac using the Mono Framework. The instructions are clear, so you should not have any trouble with that. The bad news is that the experience with the Mac version is not as good as on Windows, but hey, at least it exists! I also downloaded the Postman desktop app, which is handy when it comes to sending some data in the request body (which is not that easy using Fiddler). Below I listed other apps that I got (not related to programming itself):

  • Docker
  • Google Chrome
  • Firefox for Developers
  • Slack
  • Twitter
  • Facebook Messenger
  • Telegram
  • Skype

 


 

 

Summary

So, we come to the question from the first tweet. Do I regret it? For the moment, not at all. I mean, with Project Rider I feel like it’s going to be a kinda familiar experience, like on Windows (at least I hope so). Time will verify my decision, but I think we’ve finally come to the point where we (as .NET developers) can decide which OS suits us the most. Core gave us an opportunity, but it does not force us to think about Linux or OS X. Don’t get me wrong. I love Windows! I really do, but I needed some alternative. What are your experiences with .NET Core on UNIX? Share your opinions in the comments section below! As always, I encourage you to follow me on Twitter and Facebook just to be up to date with my new posts :)

Exploring Aurelia’s route pipelines (Sun, 13 Nov 2016)
When coding web applications, we sometimes come to the point when we need to perform some action during the user’s navigation. One of the most common examples would be authorization and checking whether the user has appropriate roles. Fortunately, Aurelia provides access to the route pipeline, so it’s super easy to add some extra steps :) Let’s get to work!

 

Inspecting RouterConfiguration

In order to add our custom pipeline steps, we need to create some routing for the application. I won’t describe the whole process, since it’s not directly related to today’s topic, but here’s some basic implementation taken from the Aurelia Hub:

 


import {RouterConfiguration, Router} from 'aurelia-router';

export class App
{
    router: Router;

    configureRouter(config: RouterConfiguration, router: Router)
    {
        this.router = router;    

        config.map([
            {route: '', moduleId: ''}
        ]);
    }
}

 

As we can see, one of the parameters used in the configureRouter method is the config object (which is an instance of the RouterConfiguration class). Let’s check its available methods:

 

[Screenshot: RouterConfiguration’s add…Step methods]

 

As marked, there are a couple of methods which might help us with creating custom pipeline steps. Therefore, I’d like to discuss them in the order they are called:

 

  • addAuthorizeStep – adds an authorize step to the pipeline. The step is called between the route loading step and the view-model’s canActivate method (if defined).
  • addPreActivateStep – adds a preActivate step to the pipeline. This step is called between the view-model’s canActivate method and the previous view-model’s deactivate method (if defined).
  • addPreRenderStep – adds a preRender step to the pipeline. The step is called after the view-model’s activate method but before the view is rendered.
  • addPostRenderStep – adds a postRender step to the pipeline. The step is called after the view is rendered.

 

In addition to the above-mentioned methods, there is a fifth one called addPipelineStep. This one is a universal way to create a chosen step, which in addition to the step object also requires its name (which is just a string).

 

The implementation

Knowing the whole pipeline, we can play with it a little. Let’s add all four steps to the application’s router. The implementation is given below:

 

import {RouterConfiguration, Router, NavigationInstruction, Next} from 'aurelia-router';

export class App {
    router: Router;

    configureRouter(config: RouterConfiguration, router: Router) {
        this.router = router;

        config.addAuthorizeStep(AuthorizeStep); // alternate way: config.addPipelineStep('authorize', AuthorizeStep);
        config.addPreActivateStep(PreActivateStep); // alternate way: config.addPipelineStep('preActivate', PreActivateStep);
        config.addPreRenderStep(PreRenderStep); // alternate way: config.addPipelineStep('preRender', PreRenderStep);
        config.addPostRenderStep(PostRenderStep); // alternate way: config.addPipelineStep('postRender', PostRenderStep);

        config.map([
            { route: '', moduleId: 'books', }
        ]);
    }
}

export class AuthorizeStep {
    run(navigationInstruction: NavigationInstruction, next: Next): Promise<any> {
        console.log("I'm inside the authorize step!")
        return next();
    }
}

export class PreActivateStep {
    run(navigationInstruction: NavigationInstruction, next: Next): Promise<any> {
        console.log("I'm inside the pre activate step!")
        return next();
    }
}

export class PreRenderStep {
    run(navigationInstruction: NavigationInstruction, next: Next): Promise<any> {
        console.log("I'm inside the pre render step!")
        return next();
    }
}

export class PostRenderStep {
    run(navigationInstruction: NavigationInstruction, next: Next): Promise<any> {
        console.log("I'm inside the post render step!")
        return next();
    }
}

 

As you probably noticed, we didn’t need to implement any special interface inside each step class. All we have to deliver to Aurelia is a run method which accepts two parameters of types NavigationInstruction and Next (taken from the aurelia-router module). Inside the configureRouter method we registered all created steps using the dedicated add… methods, but I also presented the alternate ways of doing that. All right then, let’s run the app and check whether it’s working:

 

[Screenshot: console logs from all four pipeline steps]

 

Okay, everything works, but that was super easy, wasn’t it? How about something more useful? As I mentioned at the beginning of the article, one of the most common scenarios for this kind of functionality is user authorization. Below is some code that can deal with such a requirement:

 

import {RouterConfiguration, Router,NavigationInstruction, Next, Redirect} from 'aurelia-router';

export class App
{
    router: Router;

    configureRouter(config: RouterConfiguration, router: Router)
    {
        this.router = router;

        config.addAuthorizeStep(AuthorizeStep);        

        config.map([
            {route: '', moduleId: 'books', settings: {roles: ['admin', 'superUser']}},
            {route: 'search', moduleId: 'search'}
        ]);
    }
}

export class AuthorizeStep
{
     run(navigationInstruction: NavigationInstruction, next: Next): Promise<any> 
     {  
         let user = {role: 'admin'};
         let requiredRoles = navigationInstruction.getAllInstructions().map(i => i.config.settings.roles)[0];

         let isUserPermitted = requiredRoles ? requiredRoles.some(r => r === user.role) : true;

         if (isUserPermitted)
            return next();

         return next.cancel();
     }
}

 

Let’s start with the route declaration. As you can see, I added a parameter called settings to the default module. This allows putting some additional information on each route. In this case, I put in an array with role names, so if the user wants to navigate to the view, he needs to be in one of the defined roles. Now, moving forward to the run implementation: first, we have to get the user object. Normally I’d inject some AuthService or anything else that keeps the user object, but that’s not the point of this example 😉 Having a user, we need to get the roles defined for the specific view. For this purpose, we can use the navigationInstruction object, which gives us access to the route’s settings. After that, all we need to do is check whether the user is in one of the defined roles. If the condition is satisfied, we invoke next() to exit the authorization step and move forward in the pipeline. Otherwise, we reject the operation by invoking the cancel method. It’s worthwhile to mention that this method also has an overload allowing us to redirect the user to a specific view, like:

 

 return next.cancel(new Redirect('nameOfTheRoute')); 

 

Summary

That’s all I prepared for you today! I hope this article will help you in the future while creating your awesome Aurelia apps! As I wrote earlier, there are many scenarios that can be easily handled by the route pipelines (not only authorization). The above example was just the tip of the iceberg. As always, I encourage you to follow me on Twitter and Facebook just to be up to date with my new articles!

Func<T> vs. Expression<Func<T>> (Mon, 07 Nov 2016)
A few days ago I was playing with Entity Framework when suddenly an Exception popped out on the screen:

 

Additional information: LINQ to Entities does not recognize the method ‘System.String GetFullName(User)’ method and this method cannot be translated into a store expression.

 

That was because I accidentally treated an Expression&lt;Func&lt;T&gt;&gt; as if it were a Func&lt;T&gt;. Two minutes later the bug was fixed, and I was happy to move forward. But I realized that many developers don’t get the difference between these two “beings” and try to use them interchangeably without any reflection. Frankly, that’s kind of weird to me, since it’s not that hard to understand. And I’ll do my best to prove that 😉

 

Exploring Func<T> and Expression<Func<T>>

Let’s start with some simple code that should introduce us to the topic:

 

Func<int, int> pow = arg => arg * arg;

Expression<Func<int, int>> powExpression = arg => arg * arg;

 

These two lines look almost identical, but believe me, they differ significantly. To notice the difference, we need to print both objects. The result looks as follows:

 

[Screenshot: console output of printing both objects]

 

As you can see, the first WriteLine call printed the object’s type, which in this case is Func&lt;int, int&gt;. By this, the C# compiler says something like: “Well, all I know is that you need some integer, and after execution, I’ll get back an integer.” That’s it. The compiler treats pow as a “black box” with some input and output — it cannot look inside it until it’s invoked at runtime. It’s also worthwhile to mention that Func&lt;T&gt; is equivalent to a delegate. Here’s the example:

 

Func<int, int> powDelegate = delegate (int arg)
{
    return arg * arg;
};

 

Things look completely different when we come to the second line of the printed result. We’ve just got the expression itself — an inspectable “recipe” that describes how to square the argument. That’s because with Expression&lt;T&gt; the code is stored in a tree-like data structure called an expression tree, which lets the C# compiler treat the code as data. What’s even cooler is that Expression&lt;T&gt; has a Compile method that compiles the expression at runtime and produces a Func&lt;T&gt;:

 

Expression<Func<int, int>> powExpression = arg => arg * arg;

Func<int, int> pow = powExpression.Compile();

 

That seems logical. Having a “recipe,” it’s easy to put it in a black box. But as you’ve probably guessed, there’s no way back: once you compile an expression, you won’t be able to reconstruct it from the func. Bearing in mind that a Func&lt;T&gt; carries no information about how its logic is performed, we can guess why Entity Framework threw the exception mentioned in the introduction.
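To make that one-way street concrete, here is a small self-contained sketch (my own illustration, not code from the article): the expression tree can be walked as data before being compiled into an ordinary Func.

```csharp
using System;
using System.Linq.Expressions;

class Program
{
    static void Main()
    {
        Expression<Func<int, int>> powExpression = arg => arg * arg;

        // The "recipe" is data: we can inspect its parts before ever running it.
        var body = (BinaryExpression)powExpression.Body;
        Console.WriteLine(body.NodeType);                    // Multiply
        Console.WriteLine(powExpression.Parameters[0].Name); // arg

        // Compile turns the tree into an executable black box at runtime.
        Func<int, int> pow = powExpression.Compile();
        Console.WriteLine(pow(5));                           // 25
    }
}
```

Note that the reverse direction doesn’t exist — there is no Decompile on Func&lt;T&gt;.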

 

Playing with IQueryable<T>

The first thing we need to understand is the way EF works. Like any other ORM (object-relational mapper), its primary task is to translate object-oriented code into SQL. For this purpose, it uses the generic IQueryable interface from the System.Linq namespace. Because LINQ is lazy, as long as we operate on this interface no round-trip to the database is performed, so we can freely compose a query. The translation happens when we materialize the IQueryable. An example is given below:

 

public void Test()
{
    var context = new Context();

    var qUsers = context.Users; //IQueryable

    var qFilteredUsers = qUsers.Where(u => u.Age > 30); //IQueryable

    var qOrderedFilteredUsers = qFilteredUsers.OrderBy(u => u.FirstName); //IQueryable

    var result = qOrderedFilteredUsers.ToList(); // Execute the SQL on the DB. I'm so lazy...
}

 

Now let’s take a look at the Where method:

 

[Screenshot: IntelliSense showing the Where overloads]

 

There are four overloads available, but let’s focus on just two of them: we can choose between Func&lt;User, bool&gt; and Expression&lt;Func&lt;User, bool&gt;&gt;. Does that mean I lied to you and we can use them interchangeably? Let’s execute the following code:

 

public void Test()
{
    var context = new Context();

    Func<User, bool> funcPredicate = user => user.Age > 30;

    Expression<Func<User, bool>> expressionPredicate = user => user.Age > 30;

    var funcResult = context.Users.Where(funcPredicate).Count();

    var expressionResult = context.Users.Where(expressionPredicate).Count();

    Console.WriteLine($"FUNC: {funcResult} | EXPRESSION: {expressionResult}");
}

 

Here’s the result:

 

[Screenshot: console output — both counts are equal]

 

Both queries returned the same result! But before you call me a moron, let me show you both SQL queries captured by SQL Server Profiler:

 


//Expression<Func<User,bool>> predicate

SELECT
[GroupBy1].[A1] AS [C1]
FROM ( SELECT
COUNT(1) AS [A1]
FROM [dbo].[Users] AS [Extent1]
WHERE [Extent1].[Age] > 30
) AS [GroupBy1]


//Func<User, bool> predicate
SELECT
[Extent1].[Id] AS [Id],
[Extent1].[FirstName] AS [FirstName],
[Extent1].[LastName] AS [LastName],
[Extent1].[Age] AS [Age]
FROM [dbo].[Users] AS [Extent1]

 

What happened? Everything becomes clear when we navigate to both Where overloads. The Expression&lt;Func&lt;T&gt;&gt; version operates on the IQueryable interface, and that seems natural: LINQ to Entities could include the WHERE statement in our query because it had access to the expression tree. In other words, it could read the recipe that said: “Hey you, give me only users older than 30!” The Func&lt;T&gt; version, on the other hand, is implemented on top of IEnumerable, because all we can do with a Func is execute it. Since it’s impossible to translate it back into an expression, LINQ to Entities had to fetch all the data and then run the function on each element to count the users. But that example didn’t throw an exception, right? Let’s move to the last one:
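The same overload resolution can be observed without a database. Below I use AsQueryable on an in-memory array (toy data of my own, not the post’s Users table) to show which Where gets picked for each predicate type:

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

class Program
{
    static void Main()
    {
        IQueryable<int> ages = new[] { 25, 35, 40 }.AsQueryable();

        Expression<Func<int, bool>> expressionPredicate = age => age > 30;
        Func<int, bool> funcPredicate = age => age > 30;

        // Queryable.Where accepts the expression — the predicate stays a tree,
        // so a real provider (like LINQ to Entities) could translate it to SQL.
        var viaExpression = ages.Where(expressionPredicate);
        Console.WriteLine(viaExpression is IQueryable<int>); // True

        // With a plain Func only Enumerable.Where matches — from this point on
        // we've silently dropped to LINQ to Objects (fetch everything, filter in memory).
        var viaFunc = ages.Where(funcPredicate);
        Console.WriteLine(viaFunc is IQueryable<int>);       // False

        // Both still produce the same answer, which is exactly why the bug hides so well.
        Console.WriteLine(viaExpression.Count() == viaFunc.Count()); // True
    }
}
```

The results match, but only the first pipeline stays translatable — the mechanism behind the two very different SQL statements above.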

 

public class FuncTest
{
    public void Test()
    {
        var context = new Context();

        Expression<Func<User, ReadModel>> selector = user => new ReadModel
        {
            FullName = user.GetFullName()
        };

        var result = context.Users.Select(selector).ToList();
    }
}

public static class UserExtensions
{
    public static string GetFullName(this User user) => user.FirstName + " " + user.LastName;
}

public class ReadModel
{
    public string FullName { get; set; }
}

 

In this case, we’ll get the exception from the introduction. To be clear, the above code compiles but fails at runtime. The expression served here is correct — it takes an entity and returns a ReadModel — but the problem is that LINQ to Entities doesn’t know how to translate the GetFullName method into a SQL query, since the expression tree doesn’t describe its body. What’s the solution? We need to move the method’s code inside the expression:

 


public class FuncTest
{
    public void Test()
    {
        var context = new Context();

        Expression<Func<User, ReadModel>> selector = user => new ReadModel
        {
            FullName = user.FirstName + " " + user.LastName
        };

        var result = context.Users.Select(selector).ToList();
    }
}

public class ReadModel
{
    public string FullName { get; set; }
}

 

That looks a bit more complicated (especially when writing a complex query), but it’s far more efficient than fetching all the data from the database.
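A nice side effect of keeping the logic as an expression: the same tree can be handed to an IQueryable provider or compiled once for in-memory use. A sketch with simplified types (the User shape and field names here are my own, not the article’s exact model):

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

public class User
{
    public string FirstName { get; set; } = "";
    public string LastName { get; set; } = "";
}

class Program
{
    // Single source of truth for the projection, stored as a tree.
    static readonly Expression<Func<User, string>> FullName =
        u => u.FirstName + " " + u.LastName;

    static void Main()
    {
        var users = new[] { new User { FirstName = "John", LastName = "Doe" } };

        // In-memory: compile the tree once into a plain Func.
        Func<User, string> fullName = FullName.Compile();
        Console.WriteLine(users.Select(fullName).First());               // John Doe

        // Against an IQueryable the very same tree is passed as data,
        // so a real provider could translate it (e.g. into SQL).
        Console.WriteLine(users.AsQueryable().Select(FullName).First()); // John Doe
    }
}
```

This way the “recipe” lives in one place instead of being duplicated as a method and an inline expression.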

 

Summary

I did my best to explain the topic, and hopefully it’ll help you in your work. If you’ve got any questions related to it, feel free to ask in the comments section :) As always, I encourage you to follow me on Twitter and Facebook to stay up to date with fresh posts!

The article Func&lt;T&gt; vs. Expression&lt;Func&lt;T&gt;&gt; originally appeared on Forever F[r]ame.

Generating view model and view using Aurelia CLI http://foreverframe.pl/generating-view-model-and-view-using-aurelia-cli/ Thu, 27 Oct 2016 19:38:39 +0000
In the previous post, I introduced Aurelia – a great new JavaScript framework created by Rob Eisenberg. We also used its Command Line Interface (CLI) to create a new Aurelia project with all its dependencies, unit tests, and an HTTP server. As I announced back then, we’re going to play with the CLI to generate some code inside our project. So, let’s get started!

Aurelia CLI generator

Okay, before the implementation, let’s find out what kind of code we can generate using Aurelia’s CLI. To do that, just type the following command inside your project:

 


au generate

 

The following screen presents available options:

 

[Screenshot: available generator options]

 

We can check whether it’s working by creating a sample attribute. All we have to do is type:

 


au generate attribute

 

You’ll be asked to type the name of your attribute. I’m going to call it Test (clever, right?). Almost immediately, you should get information about successful creation. Now, where should we look for the generated code? Below I listed the paths for each generated “thing”:

 

  • attribute: ‚./src/resources/attributes’
  • binding-behaviour: ‚./src/resources/binding-behaviours’
  • element: ‚./src/resources/elements’
  • value-converter: ‚./src/resources/value-converters’
  • generator: ‚./aurelia_project/generators’
  • task: ‚./aurelia_project/tasks’

 

As mentioned, I’ve created the Test attribute, so I’m going to check the attributes folder. Here’s the result:

 

[Screenshot: the generated file in the project tree]

 

Nice! Everything works like a charm. Except that’s not what I was looking for. None of these “things” satisfies me, since the files I usually create are a view-model with its view. What’s more, these files can live in different folders each time, because most of us group them by application functionality. So, what’s the solution? As you’ve probably noticed, Aurelia’s CLI generator offers an option to create… a generator! That’s awesome!

 

Creating view-model generator

Let’s start by typing the following command:

 


au generate generator

 

I’m going to call it view-model. Then I navigate to aurelia_project/generators and look for the view-model.ts file. Here’s the default implementation:

 


import {autoinject} from 'aurelia-dependency-injection';
import {Project, ProjectItem, CLIOptions, UI} from 'aurelia-cli';

@autoinject()
export default class TestGenerator {
  constructor(private project: Project, private options: CLIOptions, private ui: UI) { }

  execute() {
    return this.ui
      .ensureAnswer(this.options.args[0], 'What would you like to call the new item?')
      .then(name => {
        let fileName = this.project.makeFileName(name);
        let className = this.project.makeClassName(name);

        this.project.elements.add(
          ProjectItem.text(`${fileName}.js`, this.generateSource(className))
        );

        return this.project.commitChanges()
          .then(() => this.ui.log(`Created ${fileName}.`));
      });
  }

  generateSource(className) {
return `import {bindable} from 'aurelia-framework';

export class ${className} {
  @bindable value;

  valueChanged(newValue, oldValue) {

  }
}

`
  }
}

 

The code might look complicated, but it’s not. The generator has three objects injected using the autoinject decorator (delivered by aurelia-dependency-injection). All the magic happens inside the execute method. First, it invokes the ensureAnswer method on the UI object, which prints a question on the console. The typed answer is then passed in as the name argument. Next, it makes a proper file and class name (based on name) and adds a new file to a particular folder (in this case, the elements folder). The new file contains the code produced by the generateSource method, which simply returns a string. So, it’s not that hard, right? We don’t need to deal with any strange mechanisms — all we need to do is put the code inside the string, nothing more. As the last step, the generator commits all changes using the commitChanges method and prints information about the successful creation. That’s it.

As I mentioned earlier, we need to modify that code a little. First, we need to generate both view-model and view code, but that’s simple since it’s just a string. Our second objective is the ability to point at the location where the files should be placed. The code below shows the implementation. It’s not the best code I’ve ever written, but it works (at least I think it does):

 


import {autoinject} from 'aurelia-dependency-injection';
import {Project, ProjectItem, CLIOptions, UI} from 'aurelia-cli';

@autoinject()
export default class ViewModelGenerator 
{
  constructor(private project: Project, private options: CLIOptions, private ui: UI) { }

  execute() 
  {
    var self = this;

    return self.ui
      .ensureAnswer(self.options.args[0], 'What would you like to call the new view model?')
      .then(name => {
        
          let fileName = self.project.makeFileName(name);
          let className = self.project.makeClassName(name);

          return self.ui.ensureAnswer(self.options.args[1], 'Where would you like to create the new view model (this is root level)?')
          .then(path => {

                self.project.locations.push(self.project.requestedPath = ProjectItem.directory(path));

                self.project.requestedPath.add(
                    ProjectItem.text(`${fileName}.ts`, self.generateViewModelSource(className)),
                    ProjectItem.text(`${fileName}.html`, self.generateViewSource())
                );

                return self.project.commitChanges()
                    .then(() => self.ui.log(`Created ${fileName}.`));
          });                
      });
  }

  generateViewModelSource(className) 
  {
return `export class ${className}ViewModel
{
    message = 'Hello from ${className}ViewModel !';

    constructor()
    {
        
    }
}
`
  }

  generateViewSource() 
  {
      return '<template>${message}</template>'
  }
}

 

There are two major changes here. First, after making the file and class names, I invoked the ensureAnswer method again — that’s because we need to ask for the path to the file (starting from the root level of the project). Then comes this strange line:

 


self.project.locations.push(self.project.requestedPath = ProjectItem.directory(path));


 

Since there’s no such thing as aurelia-cli.d.ts, it was kind of hard to find out how to add a custom path to the project object. That’s why I dived into the JS file and discovered that it contains a locations array. So all I needed to do was push a new location and use it to add the new files. The rest of the code is almost identical to the default, except that it also adds the HTML file. Before checking whether it works, we need to add one more JSON file containing the generator’s description. It’s important that the name of the JSON file matches the generator’s name. Here’s the code:

 


{
  "name": "view-model",
  "description": "Creates a view model class with view and places them in the chosen path inside the project."
}

 

Back in the console, we can run the generate command again:

 

[Screenshot: generate command listing the new generator]

 

It looks like it detected the new generator. Let’s create a new view model using the command below:

 


au generate view-model

 

I’m going to call it Generated, and it’s going to be located under the src/generated-view-models path:

 

[Screenshot: generated view-model and view files]

 

Looks fine! Not only did it generate the code, but it also created the generated-view-models directory. Nice!

 

Summary

I hope today’s post will help you create your own great code generators (based on your needs). How about a service generator? Or maybe go a little further? Generating a new area with routing also sounds great. As always, I encourage you to follow me on Twitter and Facebook, where you can find all my new posts!

 

 

The article Generating view model and view using Aurelia CLI originally appeared on Forever F[r]ame.

Meet Aurelia & CLI http://foreverframe.pl/meet-aurelia-cli/ Sun, 23 Oct 2016 18:32:08 +0000
As we all know, the JavaScript world is one of the fastest growing. New frameworks appear and soon after die because of lacking capabilities or outdated solutions. An example here is knockout.js (which I really loved, by the way) – an excellent MVVM framework with dependency injection, two-way data binding, etc., which was eaten by Google’s child called Angular. In the meantime React happened, and all those great frameworks like Ember or Polymer. In early 2016, all frontend geeks were waiting for the new awesome framework that would replace the old ones – Angular 2. I was one of those geeks, fascinated by the new Angular, which turned out to be entirely different from what I’d known before. I was disappointed. But soon after, I read an article saying that Rob Eisenberg (a former member of the Angular 2 team) had quit after almost a year and decided to create his own interpretation of a next-generation JavaScript framework – Aurelia. I checked the project’s page, and 5 minutes later I had fallen in love completely. It was awesome!

 

What is Aurelia?

Before we introduce the CLI, it’s worthwhile to write a few words about the framework itself. In a nutshell, Aurelia is a modern JavaScript framework which supports:

 

  • Two-Way data binding
  • Routing and UI composition
  • Dependency injection
  • Both TypeScript and ES2016
  • Reusable components
  • MV* architecture
  • Unit testing

 

But what I love most about this framework is its simplicity. In some scenarios you can write pure TS/ES2016 code to implement your app’s functionality. In more complex cases, when you need dependency injection, interceptors, or something else, it’s still super easy to learn and understand. Now, I guess some of you might think I’m going to present some Aurelia basics, but I won’t. That’s because I did it a few months ago (in Polish) and, to be honest, those articles were awful. Besides the fact that I just don’t want to repeat myself on the blog, there’s already an excellent introduction to Aurelia on aurelia.io presented by Rob, so for all of you who are interested, I recommend watching that. If you need a more detailed introduction, here’s Objectivity’s webinar with Rob:

 

 

If you’re still not convinced, check the video below from NDC Oslo 😉 Besides the technology, it’s worth mentioning that Aurelia’s community is impressive right now. Back in February 2016, it was really hard to find solutions on StackOverflow or even a good article about the framework. Now everything looks completely different, including the GitHub stars (about 8.1k) and the number of contributors.

 

Creating Aurelia project using the CLI

After this longish intro, let’s start with the titular CLI. The first thing we need to do is get it from NPM using the following command:

 

npm install aurelia-cli -g

 

Now, open a Command Prompt (I’ll use ConEmu instead) and type:

 

au new

 

You should be asked about project’s name:

 

[Screenshot: CLI asking for the project name]

 

After that, you have three options for creating your project:

 

[Screenshot: project creation options]

 

I’ll select the third one just to show what options we have to configure. Next, we’re asked about the transpiler. Since I want to write my application in TypeScript, I’m going to select the second option here:

 

[Screenshot: transpiler selection]

 

Our next step is to choose a CSS processor. To be honest, CSS is not my forte, even though I’ve done some code in SASS. That’s why I’m choosing the third option here:

 

 

Now comes the “fun” part: the unit-testing configuration, using Jasmine and Karma. Alternatively, we can treat our code as state of the art without any bugs (the description made me laugh). Therefore I’m selecting the first option…

 

[Screenshot: unit testing configuration]

 

Last but not least, we have to select a code editor for working with Aurelia:

 

[Screenshot: code editor selection]

 

I’ll choose VS Code for that purpose. In the last step, we should see a summary with all the selected options. All we need to do is confirm and install the project dependencies! Now, if you’d like to check whether it’s working, go to the project’s root folder and type the following command:

 

au run

 

After a few seconds you should see the “Hello world!” text served on localhost:9000. What’s more, adding a --watch flag to the run command will start your app with BrowserSync, which refreshes the application every time you save changes in the editor. So that’s it — super easy, right?

 

Combining Aurelia & ASP.NET Core

Every web application needs a backend. As you’ve probably guessed (based on my previous posts), I’m also a .NET developer. That’s why in this section I’m going to present the way to combine Aurelia with the new ASP.NET Core. First, create a new ASP.NET Core application (I used the standard VS wizard, but you can choose Yeoman if it suits you more). After creation, navigate to the Web project’s folder (project_name/src/project_name) using the command prompt and type the following spell:

 

au new --here

 

You’ll be asked about platform targeting. Select ASP.NET Core:

 

[Screenshot: platform targeting selection]

 

The next steps are identical to those in the previous section. The screen below presents the entire solution after the whole process (Web API):

 

[Screenshot: solution structure after setup]

 

Yes, it’s really that simple!

Summary

So that was a short introduction for those who have never heard of Aurelia. The next post will also be dedicated to the CLI — we’ll use it to create a code generator for ViewModels/Views and Services :) If you don’t want to miss it, I encourage you to follow me on Twitter and Facebook, where I post every new article!

The article Meet Aurelia & CLI originally appeared on Forever F[r]ame.

How to guarantee username uniqueness with CQRS/ES? http://foreverframe.pl/how-to-guarantee-username-uniqueness-with-cqrses/ Sun, 16 Oct 2016 18:34:25 +0000
To be honest, I thought that my previous post would be the last in the CQRS/ES series, but I forgot to discuss one more thing related to the topic. Many developers don’t know how to handle the following scenario in our systems:

 

“During the creation of a user’s new account, I would like to verify whether the username is unique in the whole database. Should I use the Event Store or the read database for the query? Where should I check that?”

 

Seriously, that question is one of the most popular CQRS-related topics on StackOverflow. Therefore, in this article we’ll try to find the best solution to the problem.

 

Solution 1. Use Event Store for validation in a Command Handler

Okay, let’s say we used the Event Store for username validation inside a command handler. But then we have a problem quite similar to the one from the sixth part of the series. As we’ve already established, using the Event Store to get many domain objects at the same time is inefficient. Why? Imagine that each of your domain objects consists of 100 events and we have almost 5,000 users registered in our web application (not such a big number of users). Now, to reconstruct each object (make a projection), we need to fetch those 100 events. That gives us 100 * 5000 = 0.5 million events. It’s ridiculous, no doubt about it. So, we need to find another solution.

 

Solution 2. Use Read database for validation in a Command Handler

Since the Event Store’s data structure is not convenient for this task, maybe we should use the “current state” (read) database inside our command handler? The query using Entity Framework would be very easy:

 


var isAlreadyTaken = readDb.Users.Any(u => u.UserName == command.UserName);

 

I agree it might look like a good solution, but it’s not — for two reasons. First, we should not use the read DB on the write side. For me, it’s a kind of CQRS smell, since we wanted to separate the two sides completely; with this dependency, our flow is perturbed. But even so, this solution wouldn’t work every time. Remember that, because of the two independent databases, we have to deal with eventual consistency. What would happen in the following scenario?

 

[Diagram: command flow with eventual consistency between write and read sides]

 

Imagine that some folk created an account with the “TestUser” username. His command was sent to the command handler, a proper domain object was created, and a UserCreatedEvent was saved in the Event Store. The event then needs to be transported via the Event Bus to the appropriate handler to synchronize the read DB. But let’s say that while the event is waiting in the bus for processing, another folk creates an account with the “TestUser” username. How would our system behave? Using the read DB, we would check the username’s uniqueness — and it would pass! That’s because at that moment the read DB was not synchronized yet. I hope you’ve understood that edge case :) So, as you can see, this solution does not guarantee safety every time. We need to think about something different.

 

Solution 3. Client validation

The next solution I’d like to present is client validation. The idea is simple: we create an HTTP request from the client to the read side to check whether the username is already taken. If it’s not, we send all the necessary data to the write side as a command; otherwise, we inform the user about the validation error. That also sounds good, since it doesn’t require any weird dependencies on the write side. But it has drawbacks too: it’s only client-side validation, and it can also fail because of eventual consistency. Still, we can use it as part of a “big plan.”

 

Solution 4. Client Validation + Saga pattern

Okay, so why should we use client validation if it might not work every time? Because the probability of failure is quite low, and we’ll also secure our app on the server side. How? We know that the Event Store has the wrong structure for this kind of task, which is why we need to use the read DB. But we also know that we cannot use the read DB on the write side. So here’s the solution: we can add a constraint to the read DB which requires UserName uniqueness. In that case, adding a new user with an already-taken login would end with an exception.

Great, but we have another problem! Creating a user in the read DB means we’ve already inserted an event into the Event Store. Therefore, we should delete that event, because the read side has thrown an exception. But remember that deleting events is not allowed — we just can’t do that, since the idea is to store the entire, complete history of the domain’s state. What we can do instead is create a UserCreationCanceledEvent and add some logic to our domain object so that such a user is not reconstructed. That sounds reasonable, but we need a way to tell the domain to create the proper event from the write side. How? That’s where the Saga pattern comes into play. Consider the following sequence diagram:

 

[Sequence diagram: saga reacting to a failed read-side insert with a corrective command]

 

I know the diagram might look complicated, but it’s not 😉 What we should understand is the saga’s responsibility: if the user’s creation fails, we inform the saga about that event. Next, a corrective command is sent to the Command Bus to create the mentioned UserCreationCanceledEvent, which is then saved into the Event Store. That’s it! Now, I know some of you might think that’s a lot of steps to perform, and that’s true. But remember that we also have quite good client-side validation, which should protect our system in most cases, so after all it’s no big deal.
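The saga’s reaction can be sketched in a few lines. All names below (UserCreationFailedEvent, CancelUserCreationCommand, ICommandBus, etc.) are my own illustrative inventions, not the series’ actual code — a minimal, in-memory picture of the compensation step only:

```csharp
using System;
using System.Collections.Generic;

// Illustrative message types for the compensation flow.
public class UserCreationFailedEvent
{
    public Guid UserId { get; }
    public UserCreationFailedEvent(Guid userId) { UserId = userId; }
}

public class CancelUserCreationCommand
{
    public Guid UserId { get; }
    public CancelUserCreationCommand(Guid userId) { UserId = userId; }
}

public interface ICommandBus { void Send(object command); }

// Test double: records sent commands instead of dispatching them.
public class InMemoryCommandBus : ICommandBus
{
    public List<object> Sent { get; } = new List<object>();
    public void Send(object command) { Sent.Add(command); }
}

public class UserCreationSaga
{
    private readonly ICommandBus _bus;
    public UserCreationSaga(ICommandBus bus) { _bus = bus; }

    // The read side rejected the insert (unique constraint violated) — instead
    // of deleting history, the saga issues a corrective command, which would
    // end up as a UserCreationCanceledEvent in the Event Store.
    public void Handle(UserCreationFailedEvent e)
    {
        _bus.Send(new CancelUserCreationCommand(e.UserId));
    }
}

class Program
{
    static void Main()
    {
        var bus = new InMemoryCommandBus();
        var saga = new UserCreationSaga(bus);
        var userId = Guid.NewGuid();

        saga.Handle(new UserCreationFailedEvent(userId));

        var cmd = (CancelUserCreationCommand)bus.Sent[0];
        Console.WriteLine(cmd.UserId == userId); // True
    }
}
```

The key design point: history is never deleted — failure is recorded as one more event, and the aggregate simply refuses to reconstruct a canceled user.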

 

Summary

I hope this article will help you in the future. If you know any other good solution guaranteeing username uniqueness, feel free to share it in the comment section! The next post will be dedicated to the Aurelia framework, so if you’re also interested in frontend technologies or want to get acquainted with them, follow me on Twitter or Facebook to be up to date with brand-new articles!

The article How to guarantee username uniqueness with CQRS/ES? originally appeared on Forever F[r]ame.

CQRS/ES #6 Read database and Event Handlers http://foreverframe.pl/cqrses-6-read-database-and-event-handlers/ Sat, 08 Oct 2016 11:57:46 +0000
As I announced in the last part, our CQRS/ES journey is almost finished! But before that happens, we need to take care of the read side of our application. Before we move to the implementation, though, it’s worthwhile to explain why we need a read side at all. After all, we have an excellent data source called the Event Store, which allows us to reconstruct every domain object in our system. What’s even more awesome is that we can “time travel” in our domain by not applying all events to our domain objects. It sounds like a perfect solution for every scenario that may happen. Well, not exactly.

Imagine that your task is to create a dashboard with some statistics. In most cases, that would require getting a lot of specific data from the database. Now, someone might say that’s not a problem, since we can add some “WHERE, GROUP BY, HAVING” etc. — but here’s the thing. Our Event Store does not store the current state of the domain. It keeps only events, which first need to be applied inside domain objects. That means we would need to make a projection of each domain object and then filter them in memory. Even with snapshots, that sounds ridiculous. The conclusion is simple: event sourcing is not the right approach when we need to get a lot of domain objects at the same time — it’s just not efficient. So, how can we deal with such a problem? We need a second database, optimized for reading!
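To see why querying over the Event Store alone gets expensive, here’s a tiny self-contained sketch (the event names are made up, not the series’ actual types): the current state of an object exists only after replaying its whole history, so a dashboard-style query would have to do this for every single object before it could even start filtering.

```csharp
using System;
using System.Collections.Generic;

// Illustrative events — not the series' actual types.
public abstract class CalendarEvent { }
public class ItemCreated : CalendarEvent
{
    public string Name;
    public ItemCreated(string name) { Name = name; }
}
public class ItemRenamed : CalendarEvent
{
    public string NewName;
    public ItemRenamed(string newName) { NewName = newName; }
}

public class CalendarItemProjection
{
    public string Name { get; private set; } = "";

    // The only way to get the current state: apply every event, in order.
    public static CalendarItemProjection Replay(IEnumerable<CalendarEvent> history)
    {
        var item = new CalendarItemProjection();
        foreach (var e in history)
        {
            if (e is ItemCreated c) item.Name = c.Name;
            else if (e is ItemRenamed r) item.Name = r.NewName;
        }
        return item;
    }
}

class Program
{
    static void Main()
    {
        var history = new CalendarEvent[]
        {
            new ItemCreated("Standup"),
            new ItemRenamed("Daily standup")
        };

        // One full pass per object; multiply by thousands of objects for a dashboard.
        Console.WriteLine(CalendarItemProjection.Replay(history).Name); // Daily standup
    }
}
```

With 100 events per object and 5,000 objects, that loop runs half a million times just to answer one query — exactly the arithmetic from the username-uniqueness post.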

 

Creating a read database

Knowing the problem, we can start thinking about the structure of the read database. We already know that anything similar to the Event Store just won’t work. But let me ask you a question: how would you design an ordinary database for our calendar? All we have is just an event with some cycles. That sounds like a one-to-many relation, right? So, why don’t we use a typical “current state” database for reading? The image below shows the structure:

 

[Diagram: CalendarItem / CalendarItemCycle tables (one-to-many)]

 

Here’s the implementation of those tables and the database context using Entity Framework:

 

public class CalendarItemEntity : InternalEntity
{
    public CalendarItemEntity()
    {
        Cycles = new HashSet<CalendarItemCycleEntity>();
    }

    public CalendarItemEntity(Guid id) : this()
    {
        Id = id;
    }

    public string UserId { get; set; }

    public string Name { get; set; }

    public string Description { get; set; }

    public DateTime StartDate { get; set; }

    public DateTime EndDate { get; set; }

    // The "many" side of the relation: a calendar item owns its cycles.
    public ICollection<CalendarItemCycleEntity> Cycles { get; set; }
}

public class CalendarItemCycleEntity : InternalEntity
{
    public CalendarItemCycleEntity()
    {
        Id = Guid.NewGuid();
    }

    public Guid CalendarItemId { get; set; }

    [ForeignKey("CalendarItemId")]
    public CalendarItemEntity CalendarItem { get; set; }

    public DateTime StartDate { get; set; }

    public DateTime? EndDate { get; set; }

    public CalendarItemCycleType Type { get; set; }

    public int Interval { get; set; }
}

public class ReadSideContext : DbContext
{
    static ReadSideContext()
    {
        // Recreate the read database whenever the EF model changes.
        Database.SetInitializer(new DropCreateDatabaseIfModelChanges<ReadSideContext>());
    }

    public ReadSideContext() : base(nameof(ReadSideContext))
    {
    }

    public DbSet<CalendarItemEntity> CalendarItems { get; set; }

    public DbSet<CalendarItemCycleEntity> CalendarItemCycles { get; set; }
}
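The payoff of this structure is that read-side queries become plain LINQ with no event replay. Here is a hedged, self-contained sketch of a typical dashboard-style query shape; a trimmed stand-in class is defined locally so the snippet runs on its own, but against the real `ReadSideContext` the same query would simply target `context.CalendarItems`.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Trimmed stand-in for CalendarItemEntity, just to keep the sketch self-contained.
class CalendarItem
{
    public Guid Id { get; set; }
    public string UserId { get; set; }
    public string Name { get; set; }
    public DateTime StartDate { get; set; }
}

static class ReadQueries
{
    // All items for one user, ordered for display — a typical read-side query,
    // answered directly from the current state with no replay involved.
    public static List<CalendarItem> ItemsForUser(IEnumerable<CalendarItem> items, string userId) =>
        items.Where(i => i.UserId == userId)
             .OrderBy(i => i.StartDate)
             .ToList();
}
```

With the real context this translates one-to-one into SQL executed by Entity Framework, which is exactly what the read database is optimized for.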

 

If you read the second part of this series, you probably noticed that we used this structure to model our domain objects. Why didn't we create an Event Store first? Because it's not natural for humans to read. It's a comfortable structure for having all events in one table, but it's pretty hard to understand the relations inside our domain. The structure above shows them really well. Okay, we now have two databases: one optimized for storing events and a second optimized for reading, with clear relations inside. The question is, how do we synchronize them? And that's where events and event handlers come into play.

 

Event Handlers

Our domain objects produce a series of events which we then save into the Event Store. So, why don't we use those events to inform our read side that some data changed and should be modified? That seems quite easy since we already have an event bus, but we need one more thing: a class which receives the proper event and then modifies the read database based on the event's type. That is the Event Handler's responsibility. The implementation of a sample handler is given below:

 

public interface IEventHandler<in TEvent> where TEvent : class, IEvent
{
    Task HandleAsync(TEvent @event);
}

public class CalendarItemCreatedEventHandler : IEventHandler<CalendarItemCreatedEvent>
{
    private ICalendarItemRepository CalendarItemRepository { get; }

    public CalendarItemCreatedEventHandler(ICalendarItemRepository calendarItemRepository)
    {
        CalendarItemRepository = calendarItemRepository;
    }

    // Project the event straight into the read model — no validation needed.
    public async Task HandleAsync(CalendarItemCreatedEvent @event) =>
        await CalendarItemRepository.AddAsync(new CalendarItemEntity
        {
            Id = @event.AggregateId,
            UserId = @event.UserId,
            Name = @event.Name,
            Description = @event.Description,
            StartDate = @event.StartDate,
            EndDate = @event.EndDate
        });
}

 
Let me now explain what happened here. CalendarItemCreatedEventHandler implements a generic interface whose type parameter is constrained to IEvent. This interface delivers only one asynchronous method, HandleAsync, which receives an event and processes it. As you probably noticed, for inserting a new row into the database I used the repository pattern, which is not mandatory; you can inject the EF context directly if that suits you more. Overall, an event handler looks quite similar to a command handler, but there's one little difference: the event handler doesn't validate an event before processing it. Why? Because an event represents something that already happened in our system. We can't change it, so we don't have to validate it. Okay, I guess that's it. That was the last piece of our journey!
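The missing link between the Event Store and the handler is the dispatch step on the event bus. The article doesn't show that bus here, so the following is a hedged, in-memory sketch of what it might look like: handlers are registered per event type and each matching `IEventHandler<TEvent>.HandleAsync` is awaited on publish. A real implementation would more likely resolve handlers from an IoC container.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public interface IEvent { Guid AggregateId { get; } }

public interface IEventHandler<in TEvent> where TEvent : class, IEvent
{
    Task HandleAsync(TEvent @event);
}

public class InMemoryEventBus
{
    // One list of handler delegates per concrete event type.
    private readonly Dictionary<Type, List<Func<IEvent, Task>>> _handlers =
        new Dictionary<Type, List<Func<IEvent, Task>>>();

    public void Subscribe<TEvent>(IEventHandler<TEvent> handler) where TEvent : class, IEvent
    {
        if (!_handlers.TryGetValue(typeof(TEvent), out var list))
            _handlers[typeof(TEvent)] = list = new List<Func<IEvent, Task>>();
        list.Add(e => handler.HandleAsync((TEvent)e));
    }

    // Publish awaits every handler registered for the event's concrete type.
    public async Task PublishAsync(IEvent @event)
    {
        if (_handlers.TryGetValue(@event.GetType(), out var list))
            foreach (var handle in list)
                await handle(@event);
    }
}

// A trimmed sample event and handler, just to demonstrate the wiring.
public class CalendarItemCreatedEvent : IEvent
{
    public Guid AggregateId { get; set; }
    public string Name { get; set; }
}

public class RecordingHandler : IEventHandler<CalendarItemCreatedEvent>
{
    public List<string> Seen { get; } = new List<string>();

    public Task HandleAsync(CalendarItemCreatedEvent @event)
    {
        Seen.Add(@event.Name);
        return Task.CompletedTask;
    }
}
```

After saving new events to the Event Store, the write side would call `PublishAsync` for each of them, and the read side would be updated by handlers like the one in the listing above.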

 

Why is CQRS + ES a great combo?

Before we end this series, it's worthwhile to explain why CQRS and Event Sourcing complement each other so well. The power behind ES is the fact that we track all data that has ever been produced inside our system. We know everything, and we can time travel to the past to see how our domain looked, e.g., a few years ago. That's insane! Moreover, collecting all this data has one more advantage: you'll never be surprised by the business. It sometimes happens to me that after implementing new functionality my PM comes and asks, „How would it affect our earnings if we had decided on this two years ago?” If you keep only the current state of your domain, you can't run any such simulation. With ES it's super easy, and believe me, your PM will love you for that 😉 But ES also has a drawback: it deals poorly with reading a lot of data. That's where CQRS comes into play, right? It's a pattern which separates the read side of an application from the write side. We can scale them independently, but what's even more important, we can optimize each side (using appropriate technology and data structures) for these two kinds of operations. That's why I would recommend using ES and CQRS together.
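The „time travel” idea above can be sketched in a few lines: to answer a question about the past, replay only the events that occurred before the chosen point in time. The event shape (an `OccurredAt` timestamp and a single `Name` field) is an assumption made purely for this illustration.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical stored-event shape — illustration only.
record StoredEvent(DateTime OccurredAt, string Name);

static class TimeTravel
{
    // Rebuild the aggregate's name as it was at 'asOf' by ignoring later events.
    public static string NameAsOf(IEnumerable<StoredEvent> stream, DateTime asOf) =>
        stream.Where(e => e.OccurredAt <= asOf)
              .OrderBy(e => e.OccurredAt)
              .Select(e => e.Name)
              .LastOrDefault() ?? "(not created yet)";
}
```

A „what if we had decided this two years ago” simulation follows the same pattern: cut the event stream at a past date, apply the alternative decision, and replay forward.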

 

Summary

Well, I guess we're done! We got acquainted with the concepts of CQRS and ES, and we implemented every piece inside our awesome calendar. Remember that the whole project was just an example, one way of dealing with CQRS and ES. There are a lot of libraries that may help you implement it way faster than I did (for instance NEventStore). I hope that you enjoyed this series and that it helped you somehow. As always, I encourage you to follow me on Twitter and Facebook to be up to date with new, upcoming posts on this blog.

The article CQRS/ES #6 Read database and Event Handlers comes from the Forever F[r]ame blog.
