Follow OpenTable Tech UK Blog on Feedspot

TL;DR

This is a continuation of the post Testing an API In Memory in ASP.NET Core where I described in detail how to test an ASP.NET Core API end-to-end in an isolated, repeatable fashion. This is a shorter post that discusses an approach to structuring your project that should make it easier to test and develop.

Oh and there are lots of code samples again!

Where we left off

In my previous post I described how to get a truly isolated in-memory instance of an ASP.NET Core API configured in a test harness to perform repeatable tests. We created an InMemoryStartup class (for configuring the site for testing) and an InMemoryApi class (for encapsulating Test Doubles and setting up the API). I did not go into detail about the content of either of the startup classes, which is what I intend to address here.

Much of the complexity from this kind of testing derives from having to maintain two configuration classes and keeping them in sync correctly. It is all too easy for strange bugs to creep in and go unnoticed when differences are not correctly replicated.

The solution to this is actually quite straightforward: create shared configuration classes that can be specialised for your in-memory testing situation. My solution involves creating two configuration classes, one for each of the two methods that are implemented by convention in your regular Startup.cs and that make up the IStartup interface. These are:

  • WebAppConfigurator - for the Configure method of Startup
  • ServiceCollectionInstallerRunner - for the ConfigureServices method of Startup

Both of these classes will have in-memory versions of themselves implemented as derived classes - we will come on to this later.

WebAppConfigurator

I know, not the best name, but it describes what we are doing here, and it is hard to name configuration and bootstrapping classes well. This class will look roughly as follows:

public class WebAppConfigurator
{
    private IConfiguration Configuration { get; set; }

    public WebAppConfigurator(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IApplicationLifetime appLifeTime, IServiceProvider serviceProvider)
    {
        //This is all just an example of the configuration steps you might need.
        ConfigureLogger(app, loggerFactory, serviceProvider);
        ConfigureSwagger(app);

        ConfigureShutdown(app, appLifeTime);
        ConfigurePostStartup(app, appLifeTime);

        LocaleConfiguration.ConfigureRequestLocalization(app);
        app.UseResponseCompression();
        app.UseMvc();
    }

    protected virtual void ConfigureLogger(IApplicationBuilder app, ILoggerFactory loggerFactory, IServiceProvider serviceProvider)
    {
        //configure the logger
    }

    protected virtual void ConfigureSwagger(IApplicationBuilder app)
    {
        //configure swagger
    }
}

This would be called from your startup class as follows

public class Startup
{
    private IConfiguration Configuration { get; set; }

    public Startup(IHostingEnvironment env)
    {
        BuildConfiguration();
    }

    public void Configure(IApplicationBuilder applicationBuilder, IHostingEnvironment hostingEnvironment, ILoggerFactory loggerFactory, IApplicationLifetime appLifeTime, IServiceProvider serviceProvider)
    {
        var configurator = new WebAppConfigurator(Configuration);
        configurator.Configure(applicationBuilder, hostingEnvironment, loggerFactory, appLifeTime, serviceProvider);
    }

    //more to follow...

    private void BuildConfiguration()
    {
        var builder = new ConfigurationBuilder()
            .AddEnvironmentVariables();

        Configuration = builder.Build();
    }
}

Notice that compared to the Startup class in my first post, the Configure method has additional parameters. The number of parameters you define in your Startup.Configure() method is flexible depending on what you require for your configuration. In my particular case I required all of the listed parameters. This caused some difficulties when implementing the IStartup interface whose Configure method only takes an IApplicationBuilder.

The next step is to create an in-memory version of the WebAppConfigurator that derives from WebAppConfigurator and overrides the virtual methods that encapsulate the individual configuration steps.

This would look as follows:

public class InMemoryWebAppConfigurator : WebAppConfigurator
{
    public InMemoryWebAppConfigurator(IConfiguration configuration) : base(configuration) { }

    protected override void ConfigureSwagger(IApplicationBuilder app)
    {
    }

    protected override void ConfigureLogger(IApplicationBuilder app, ILoggerFactory loggerFactory, IServiceProvider serviceProvider)
    {
    }
}

In this example we don’t really want Swagger running in the test harness, nor do we want logging (which in our case writes to Redis), so these configuration methods do nothing. Which configuration steps you encapsulate in this way is entirely up to you and depends on your particular use case.

The final step is to now call this from our InMemoryStartup

public class InMemoryStartup : IStartup
{
    private IConfiguration Configuration { get; set; }

    public InMemoryStartup()
    {
        BuildConfiguration();
    }

    public IServiceProvider ConfigureServices(IServiceCollection services)
    {
        //todo
    }

    public void Configure(IApplicationBuilder app)
    {
        var configurator = new InMemoryWebAppConfigurator(Configuration);
        var nullLoggerFactory = new NullLoggerFactory();
        configurator.Configure(
            app,
            new HostingEnvironment()
            {
                ApplicationName = "Test Harness",
                EnvironmentName = "Test"
            },
            nullLoggerFactory,
            new ApplicationLifetime(new Logger<ApplicationLifetime>(nullLoggerFactory)),
            null);
    }

    private void BuildConfiguration()
    {
        var builder = new ConfigurationBuilder()
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }
}

Notice in the Configure method that I have constructed a HostingEnvironment type and provided some dummy data as well as passing in a NullLoggerFactory and an ApplicationLifetime type. In my case I did not need to pass the IServiceProvider so I passed null, but you can pass it after it is constructed in your ConfigureServices method.

ServiceCollectionInstallerRunner

The next step is to configure all of your dependencies. In ASP.NET WebApi I tended to leverage 3rd party dependency injection containers such as Castle Windsor, but in ASP.NET Core I find the built-in resolver works perfectly well, at least with a few additions.

The first of these I would recommend is Scrutor. This provides two extension methods on the IServiceCollection type: Scan and Decorate. Scan allows you to use convention-based registration, meaning you don’t have to register each type individually. Decorate allows for type decoration.
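
To give a feel for these two methods, here is a rough sketch of how they are used. The marker interface chosen for the scan and the lifetimes are illustrative assumptions; IEventSender and its failure-handling decorator are the abstractions used in the MassTransit example later in this post.

```csharp
// Convention-based registration: find every concrete IServiceCollectionInstaller
// in the API assembly and register it against its implemented interfaces.
services.Scan(scan => scan
    .FromAssemblyOf<Startup>()
    .AddClasses(classes => classes.AssignableTo<IServiceCollectionInstaller>())
    .AsImplementedInterfaces()
    .WithTransientLifetime());

// Decoration: wrap the registered IEventSender in a decorator that
// handles send failures, without the callers knowing about it.
services.AddSingleton<IEventSender, EventSender>();
services.Decorate<IEventSender, EventSenderFailureHandler>();
```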

The second addition is actually something that you can easily add yourself by copying the code below.

public interface IServiceCollectionInstaller
{
    void ConfigureService(IServiceCollection services, IConfiguration configuration);
}

public class ServiceCollectionInstallerRunner
{
    public void ConfigureServices(IServiceCollection services, IConfiguration configuration)
    {
        var installers = GetServiceInstallers();

        foreach (IServiceCollectionInstaller installer in installers)
        {
            installer.ConfigureService(services, configuration);
        }
    }

    protected virtual IEnumerable<IServiceCollectionInstaller> GetServiceInstallers()
    {
        Assembly assembly = Assembly.GetAssembly(typeof(ServiceCollectionInstallerRunner));

        IEnumerable<Type> installerTypes = from type in assembly.GetTypes()
                                           where typeof(IServiceCollectionInstaller).IsAssignableFrom(type)
                                                 && !type.IsInterface && !type.IsAbstract
                                           select type;

        foreach (Type installerType in installerTypes)
        {
            var constructors = installerType.GetConstructors();
            if (constructors.Any(constructor => constructor.GetParameters().Length > 0))
            {
                throw new NoParameterlessConstructorException(installerType);
            }
        }

        IEnumerable<IServiceCollectionInstaller> installers = installerTypes.Select(type => (IServiceCollectionInstaller)Activator.CreateInstance(type));
        return installers;
    }

    public class NoParameterlessConstructorException : Exception
    {
        public NoParameterlessConstructorException(Type installerType)
            : base($"Constructor containing parameters not allowed in IServiceCollectionInstaller. Unable to create type {installerType.Name}")
        {
        }
    }
}

What this provides is very similar to Castle Windsor’s IWindsorInstaller interface which lets you define type registration classes for different areas of your application.

The way I prefer to use this is to divide my API project into folders for areas such as logging, monitoring and application-specific concerns, and then in each folder have an implementation of IServiceCollectionInstaller that takes care of configuring all the types in that folder. This keeps the configuration as close as possible to the parts being configured. For example:

public class MasstransitServiceInstaller : IServiceCollectionInstaller
{
    public void ConfigureService(IServiceCollection services, IConfiguration configuration)
    {
        services.AddSingleton<IBus>(CreateBus);
        services.AddSingleton<IEventSender, EventSender>();
        services.Decorate<IEventSender, EventSenderFailureHandler>();
    }

    public IBus CreateBus(IServiceProvider provider)
    {
        var configuration = provider.GetService<IRabbitMqConfiguration>();

        IBusControl busControl = Bus.Factory.CreateUsingRabbitMq(sbc =>
        {
            var hostAddress = new Uri(configuration.RabbitMqUrl);
            var host = sbc.Host(hostAddress, h =>
            {
                h.Username("guest");
                h.Password("guest");
            });
            sbc.UseJsonSerializer();
            sbc.ReceiveEndpoint(host, "HA-OT.api-messages", c =>
            {
                c.PrefetchCount = 50;
                c.SetQueueArgument("ha", 1);
                c.SetQueueArgument("tx", 1);
            });
        });
        return busControl;
    }
}

In this example we are setting up MassTransit with RabbitMQ along with our own abstractions on top of the IBus and a decorator for that type to handle send failures.

This makes dependency configuration easy to locate and follow, rather than having one giant Startup.cs spewing hundreds of lines of registration code.

We can then complete our Startup class as follows

public class Startup
{
    private IConfiguration Configuration { get; set; }

    public Startup(IHostingEnvironment env)
    {
        BuildConfiguration();
    }

    public void Configure(IApplicationBuilder applicationBuilder, IHostingEnvironment hostingEnvironment, ILoggerFactory loggerFactory, IApplicationLifetime appLifeTime, IServiceProvider serviceProvider)
    {
        var configurator = new WebAppConfigurator(Configuration);
        configurator.Configure(applicationBuilder, hostingEnvironment, loggerFactory, appLifeTime, serviceProvider);
    }

    public void ConfigureServices(IServiceCollection services)
    {
        var installerRunner = new ServiceCollectionInstallerRunner();
        installerRunner.ConfigureServices(services, Configuration);
    }

    private void BuildConfiguration()
    {
        var builder = new ConfigurationBuilder()
            .AddEnvironmentVariables();

        Configuration = builder.Build();
    }
}

The next step is to make it possible to override the services configured for normal running so that they work for our in-memory tests. For this we need an in-memory version of the ServiceCollectionInstallerRunner and a new interface that marks a Service Collection Installer as an override. The code is shared below.

public interface IServiceCollectionInstallerOverride : IServiceCollectionInstaller
{
    Type InstallerOverridden { get; }
}

public class InMemoryServiceCollectionRunner : ServiceCollectionInstallerRunner
{
    private readonly List<IServiceCollectionInstallerOverride> _serviceCollectionInstallerOverrides;

    public InMemoryServiceCollectionRunner(List<IServiceCollectionInstallerOverride> serviceCollectionInstallerOverrides)
    {
        _serviceCollectionInstallerOverrides = serviceCollectionInstallerOverrides;
    }

    protected override IEnumerable<IServiceCollectionInstaller> GetServiceInstallers()
    {
        IEnumerable<IServiceCollectionInstaller> defaultList = base.GetServiceInstallers();

        var listWithOverrides = new List<IServiceCollectionInstaller>();

        foreach (IServiceCollectionInstaller defaultInstaller in defaultList)
        {
            var installerOverride =
                _serviceCollectionInstallerOverrides.SingleOrDefault(o => o.InstallerOverridden == defaultInstaller.GetType());

            if (installerOverride != null)
            {
                listWithOverrides.Add(installerOverride);
            }
            else
            {
                listWithOverrides.Add(defaultInstaller);
            }
        }

        return listWithOverrides;
    }
}

The way this works is that you create overrides for the service installers that install dependencies that you want to replace with a Test Double. Taking our previous example of the MasstransitServiceInstaller we can create the following:

public class MasstransitServiceInstallerOverride : IServiceCollectionInstallerOverride
{
    private readonly FakeEventSender _fakeEventSender;

    public MasstransitServiceInstallerOverride(FakeEventSender fakeEventSender)
    {
        _fakeEventSender = fakeEventSender;
    }

    public void ConfigureService(IServiceCollection services, IConfiguration configuration)
    {
        services.AddSingleton<IEventSender>(_fakeEventSender);
    }

    public Type InstallerOverridden => typeof(MasstransitServiceInstaller);
}

This would be used as follows

var serviceCollectionInstallerOverrides = new List<IServiceCollectionInstallerOverride>
{
    new MasstransitServiceInstallerOverride(_fakeEventSender),
};

var serviceCollectionRunner = new InMemoryServiceCollectionRunner(serviceCollectionInstallerOverrides);

We create a list of the overrides, and the overrides know which installer they will replace. The InMemoryServiceCollectionRunner then matches up overrides with installers and then runs the override to configure dependencies instead of the normal installer.
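
The InMemoryStartup shown earlier left ConfigureServices as a todo. A sketch of how it might accept the runner and use it is below; the constructor signature is an assumption inferred from the way InMemoryApi constructs the class, and returning BuildServiceProvider() is one straightforward way to satisfy IStartup.

```csharp
public class InMemoryStartup : IStartup
{
    private readonly InMemoryServiceCollectionRunner _runner;
    private IConfiguration Configuration { get; set; }

    // Assumed constructor: the runner (with its overrides) is injected
    // by the test harness rather than created internally.
    public InMemoryStartup(InMemoryServiceCollectionRunner runner)
    {
        _runner = runner;
        BuildConfiguration();
    }

    public IServiceProvider ConfigureServices(IServiceCollection services)
    {
        // Run all installers, with overrides substituted for Test Doubles.
        _runner.ConfigureServices(services, Configuration);
        return services.BuildServiceProvider();
    }

    // Configure and BuildConfiguration as shown earlier...
}
```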

This leads us to a version of the InMemoryApi that looks as follows

public class InMemoryApi
{
    public InMemoryApi()
    {
        DoubledEventSender = new FakeEventSender();

        var serviceCollectionInstallerOverrides = new List<IServiceCollectionInstallerOverride>
        {
            new MasstransitServiceInstallerOverride(DoubledEventSender)
        };

        var startUp = new InMemoryStartup(new InMemoryServiceCollectionRunner(serviceCollectionInstallerOverrides));
        var webHostBuilder = new WebHostBuilder().UseStartupInstance(startUp);
        var server = new TestServer(webHostBuilder);
        Client = server.CreateClient();
    }

    public FakeEventSender DoubledEventSender { get; private set; }
    public HttpClient Client { get; private set; }
}

This might seem like a lot of complication to achieve more or less what we did in the previous post, but this approach is in my opinion much easier to scale to a large/complex project. When you have possibly 100 dependencies to manage and need to change some of them for in-memory testing this technique makes things easier to follow.

I would be the first to admit that this is an opinionated approach to the problem of structuring your ASP.NET Core projects. If this works for your particular situation then I am glad to have helped.


A couple of years ago I was asked to give a talk to programming undergraduates at Kings College, London. I wrote up the session as a blog post and added it to my personal website, where it has received thousands (okay, one or two) of hits since.

Reading it back this week I was pleasantly surprised how relevant and useful it still is, and as we are currently hiring engineers at the start of their career it is worth resharing here.

Getting that first job

Whilst I can still remember the interview for my first job, I have greater insight these days from interviewing developers, and I have personally recruited, managed and mentored a number of junior engineers at the beginning of their careers. The next sections talk about our hiring process at OpenTable and within this framework here’s how you’d get us to give you a job.

The CV

The CV is not as crucial as you might think. Get the basics right – no typos and a neat layout – but don’t cram it full of everything you’ve ever done, just enough to whet our appetite. A single page is usually best (no more than two).

The most important thing for a job as a developer is to show that you love writing code. Nothing conveys this passion better than sites/plugins/projects/online courses that you have worked on, particularly outside of your employment. Even better, provide links to repositories, websites, blogs or Q&A sites to show your work and genuine interest.

Dare I say it, but your exam results do not really matter to us. Passion, backed up with examples of work, will grab our attention, with that hard-earned first or 2:1 counting only as a tie-breaker.

The code test

If we like your CV we’ll ring you for a quick chat, and then ask you to complete a test - either at our office or in your own time. The coding test will vary from role-to-role but will need skills that will be required in the job. If you can’t do the test at all then this isn’t the right role for you, but if you can do part of it then give it a go as it will give us things to talk about in the next stage of the interview.

The face-to-face interview

If there’s enough potential in your test we’ll invite you for an interview. It is daunting, but we like to talk with you for two to three hours to make sure you’re right for the role and that the role is right for you.

The first half hour will be a code review of your test in which we’ll get you to explain how you completed the exercise and we’ll pair with you to refactor or modify the test’s functionality. We’ll be impressed at this stage if you’ve looked again at your code before the interview and can confidently justify your programming decisions. Even if you couldn’t do half the things required in the test, you stand a good chance if you’re knowledgeable about the sections you did complete.

For the next 30-60 minutes we’ll conduct a technical interview. You’ll be interviewed by people whose skills overlap with yours and we’re looking for both a general programming understanding and a couple of subjects in which you can speak more deeply. If you don’t think you’re going to be asked about your favourite subjects try and drop them into the conversation. “Do you work with xxx because I’m really interested in that?” will grab our attention and prompt us to ask more.

The next 40-60 minutes will be a “cultural” interview in which we want to get to know you as a person, how you like to develop code and your understanding of the software development lifecycle. Even if you’ve never written code professionally try and convey passion and a genuine interest and you’ll impress us. A sense of humour is always welcome.

Finally we’ll ask you to spend some time with the head of engineering in London. This is hopefully the most relaxed time in the process. If you’re good enough to make us want to meet you, you’ll definitely have other companies knocking on your door, so we’ll try and convince you that OpenTable is a great place to work and assess whether it is the right place for you. We encourage candidate questions throughout the day but this is the best time to have a genuine chat.

The first year or two in the job

Congratulations, you’ve got your first job. What now? Now, you simply carry on learning (and get paid for it).

You won’t know a fraction of what the job involves, but in software development no one can know everything. Not even search engines know it all and this is where you will spend a lot of your time. Vastly experienced developers still have to google the answers to things, but when you start out you’ll be doing this a great deal – and that’s absolutely fine.

If you have the drive to solve problems and know how to look up answers then you’re in the right career. Never be embarrassed to teach yourself as you go. Trial and error will be your default technique and you’ll probably repeat the same mistakes more than once. Software development is constantly changing but if you’re always learning then you’ll be successful.

If you want to try something new in your job don’t ask permission, just give it a go. Unless it could affect the company’s bottom line, most mistakes are forgivable and you’ll learn your lessons. Just don’t be reckless.

Get active in the developer community. There are hundreds of free and inexpensive meet-ups and conferences. Be talkative with the people you meet – you’ll learn from them and get to hear about projects or jobs that could be perfect for you. Don’t be self-conscious, and ask as many questions as you can. You’ll find this much easier earlier in your career before you’re too embarrassed because you feel you should already know.

Things to consider as you progress

You’ll hopefully love the company you work for, but only stay with them if you’re genuinely still learning. Job hunting is hard and it’s easy to pretend your current job is just fine – especially if you like your colleagues – but be honest with yourself about your situation to keep progressing.

Don’t feel like you have to go into management. I’m a manager now and describe my job as “talking to people” – but if this isn’t for you then find a company that nurtures individual contributors. You can still become very senior in the industry as a Principal Engineer or Architect, with little or no people management required.

Finally, do your best to build a good relationship with your manager. This starts with being reliable and prepared, but take it to the next level by understanding your manager’s biggest problems and frustrations, and doing what you can to solve them. If you struggle to communicate with your manager then identify someone with whom they have a good relationship, analyse why and emulate this. Try to understand the business strategy and identify opportunities and threats.

A proactive, reliable employee who understands their manager will get the interesting projects and rapid career progression.

In summary
  • Start building things straightaway
  • Be passionate in your interview
  • Embrace trial and error, don’t be afraid to make mistakes
  • Get involved in the developer community
  • Don’t stay too long in a job in which you’re not learning
  • Get on the same wavelength as your manager for good, long-term prospects

For most people circuit breakers are a concept that belongs in the world of electricity. They are usually a box of fuses under the stairs that prevents the tangle of extension cords from turning into an unexpected open fireplace behind the TV. But the concept of a circuit breaker is something that we can apply to software and software services.

Martin Fowler described a software circuit breaker as follows:

“… a protected function call in a circuit breaker object, which monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return with an error, without the protected call being made at all”

Background

OpenTable runs on many microservices that depend on one another to deliver the OpenTable site. These services can have dependencies on data stores such as MongoDB and Elasticsearch as well as services such as RabbitMQ and Redis. Sometimes these dependencies can have performance or availability issues, such as when somebody accidentally drops an index on a collection in Mongo.

This actually happened: your author accidentally dropped a critical index while working with a DB synchronization tool, which resulted in downtime. Hence this article.

When this happens calls to dependencies may fail or take a long time to complete which then causes the dependent service to behave in the same way. Other upstream services can also suffer the same problem. This can be described as a cascading failure.

Ideally what we would like to happen at this point is for our service to recognize what is happening, fail gracefully, and stop calling the service that already can’t keep up with the load placed upon it. (As well as alerting us to the problem). This is where a software circuit breaker can help us.

Polly to the rescue

Polly (http://www.thepollyproject.org/) is a transient-fault-handling library which includes an implementation of a software circuit breaker (amongst other things, which we will come to later). The documentation for Polly is excellent so I won’t try to replicate it here.

How we used Polly

As you would expect, we wrapped calls to external services in Polly circuit breakers (using the Advanced Circuit Breaker policy) and configured them with failure thresholds that we judged would protect overwhelmed downstream services. But then we went one step further…
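
As a rough illustration, an advanced circuit breaker policy looks like the sketch below. The thresholds are illustrative rather than the values we use in production, and mongoCollection/query stand in for whatever protected call you are wrapping.

```csharp
// Break when >=50% of calls fail, measured over a 10-second window
// containing at least 8 calls; stay open for 30 seconds before half-open.
CircuitBreakerPolicy breaker = Policy
    .Handle<Exception>()
    .AdvancedCircuitBreaker(
        failureThreshold: 0.5,
        samplingDuration: TimeSpan.FromSeconds(10),
        minimumThroughput: 8,
        durationOfBreak: TimeSpan.FromSeconds(30));

// The protected call: once the breaker is open, this throws
// BrokenCircuitException without calling the dependency at all.
var reservations = breaker.Execute(() => mongoCollection.Find(query));
```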

A circuit breaker has state, and we wanted to be able to see that state, as it provides insight into the health of our service. To achieve this, we named every circuit breaker instance according to what it protected (RabbitMQ, external services or Mongo collection names). We then made that state accessible on an endpoint of our API. This allows us to use our monitoring tools to alert us when one of the breakers Opens, providing us with early warning of failures and making it easier to pin-point the failure.

We created a CircuitBreakerRegistry, initialized as a singleton instance within our DI container, which lets us store a CircuitBreakerPolicy against its unique name, retrieve all stored policies, or retrieve a single policy by name.

public class CircuitBreakerRegistry
{
    public void StoreCircuitBreaker(CircuitBreakerPolicy policy, string breakerId)
    {
        ...
    }

    public ConcurrentDictionary<string, CircuitBreakerPolicy> GetAllRegisteredBreakers()
    {
        ...
    }

    public CircuitBreakerPolicy GetCircuitNamedBreaker(string breakerId)
    {
        ...
    }
}

We then created an API controller that allows us to retrieve a list of all registered Circuit Breakers along with their current state. We configured our monitoring software to check this endpoint and alert when an Open breaker is detected.

public class CircuitBreakerController : ApiController
{
    [HttpGet]
    public IHttpActionResult ListAllCircuitBreakers()
    {
        List<CircuitBreakerViewModel> breakers = _circuitBreakerRegistry
            .GetAllRegisteredBreakers()
            .Select(breaker => new CircuitBreakerViewModel { BreakerId = breaker.Key, BreakerState = breaker.Value.CircuitState.ToString() })
            .ToList();
        var viewModel = new CircuitBreakersViewModel { Breakers = breakers };

        return Content(HttpStatusCode.OK, viewModel, new JsonMediaTypeFormatter());
    }

    [HttpPut]
    public IHttpActionResult ToggleBreaker([FromUri]string breakerId, [FromBody]string value)
    {
        if (value != "Closed" && value != "Isolated")
            return BadRequest("Specify either 'Isolated' or 'Closed'");

        var breaker = _circuitBreakerRegistry.GetCircuitNamedBreaker(breakerId);
        if (breaker == null)
            return NotFound();

        if (value == "Isolated")
            breaker.Isolate();
        else
            breaker.Reset();

        return ListAllCircuitBreakers();
    }
}

The API also allows us to toggle the state of the breaker between Closed (think automatic) and Isolated (think manual fail-over). This could be useful for several reasons including:

  • to test our fall-back strategy
  • if we knew a dependency was going to fail or in a failing state (but not triggering the breaker)
  • if we knew a dependency had a bad deploy that might take a while to rollback/fix.

You might think that Closed is bad and Open is good. In circuit breakers it is the other way around: a switch that is open breaks the circuit, while a closed switch completes the circuit so electricity/data can flow.

Motivation

I’ve already mentioned that the reason for my interest in Circuit Breakers and this post was through direct experience of downtime I caused. The truth is I have always been interested in Circuit Breaker technology and application resilience in general, but this incident focused time and attention on the problem.

Clearly this is a case of “Locking the stable door after the horse has bolted” but that’s often the way in engineering. However, something like this might happen again and I would rather put measures in place to deal with it ahead of time. You might argue that given an Open circuit breaker can only fail fast or return an empty result, it makes no difference to the user experience as the data they want to view isn’t available either way. This is true to a degree, but a Circuit Breaker isolates only broken aspects of a service allowing the rest to function normally. Failing fast also means that the user experience of other aspects of the site are unaffected which is better than potentially impacting the whole site.

Insurance…

…is something everybody buys and hopes to never use. Protecting your application with Circuit Breakers and other application resilience measures is one of those things that you should really do, but hope to never need. There is of course a cost involved in adding such measures and it is up to you to decide whether you think it is worth it.

Application Resilience

The Circuit Breaker pattern is only one of many Application Resilience Patterns. If you have already looked at the Polly Project Web site you may have seen that it lists several different Application Resilience policies that it offers. Briefly these are:

  • Retry (which is where the Polly name comes from)
  • Circuit Breakers (already covered)
  • Timeout
  • Bulkhead Isolation (think of it as a predictive circuit breaker)
  • Cache
  • Fall-back

This is by no means a definitive list of Application Resilience measures, but it is a good starting point.

Again, the documentation on the Polly website describes these in detail so I won’t repeat it here, other than to say that these Application Resilience methods are something you might want to consider using alongside Circuit Breaker policies. We use Retry inside of a Circuit Breaker (using the Policy Wrap) to overcome transient faults. We apply a Timeout policy when calling some 3rd party client libraries that have no built-in timeout mechanism. For instance, a version of the RabbitMQ client library for .NET will block our application start-up if the configured RabbitMQ instance is not running on the target host. A forced timeout ensures that our application fails and informs us of why.
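
A sketch of composing these policies with Polly's Policy Wrap is below. The retry counts and timeouts are illustrative, and connectToRabbitMq stands in for whatever blocking call you are protecting.

```csharp
// Retry up to 3 times with a growing back-off for transient faults.
var retry = Policy
    .Handle<Exception>()
    .WaitAndRetry(3, attempt => TimeSpan.FromMilliseconds(200 * attempt));

// Break after 5 consecutive failures (i.e. after the retries are exhausted
// 5 times); stay open for 30 seconds.
var breaker = Policy
    .Handle<Exception>()
    .CircuitBreaker(exceptionsAllowedBeforeBreaking: 5, durationOfBreak: TimeSpan.FromSeconds(30));

// Force a timeout on a client library with no built-in timeout mechanism.
var timeout = Policy.Timeout(TimeSpan.FromSeconds(2), TimeoutStrategy.Pessimistic);

// Outermost first: the breaker only sees a failure once the inner
// retries have given up; the timeout bounds each individual attempt.
var resilient = Policy.Wrap(breaker, retry, timeout);
resilient.Execute(() => connectToRabbitMq());
```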

Other Libraries

Polly is a great library for .NET developers, and similar libraries are available in other languages. Hystrix (https://github.com/Netflix/Hystrix) is one such tool that you may have already heard of, as it was developed by Netflix and is written for Java applications. There is a version of Polly in JavaScript called PollyJs (https://github.com/mauricedb/polly-js) that provides retry, and there are of course packages that implement Circuit Breakers in JavaScript as well.

Final thoughts

If you decide to use any of these existing frameworks, roll your own or simply implement these patterns directly into your application, the key thing to consider is how you want your application to behave during failure. You should consider how it will affect its upstream clients and how they will react to your application in different states as this is not always as obvious as you might expect. Most importantly, consider the business impact of your failure modes on the customer. You should aim to minimize the impact on the user experience and think about how much you want to tell them when you are offering a reduced service.


Many engineers work every day on opentable.com from our offices located in Europe, America, and Asia, pushing changes to production multiple times a day. This is usually very hard to achieve; in fact, it took us years to get to this point. I described in a previous article how we dismantled our monolith in favour of a Microsites architecture. Since the publication of that blog post we have been working on something I believe to be quite unique, called OpenComponents.

Another front-end framework?

OpenComponents is a system to facilitate code sharing, reduce dependencies, and easily approach new features and experiments from the back-end to the front-end. To achieve this, it is based on the concept of using services as interfaces - enabling pages to render partial content that is located, executed and deployed independently.

OpenComponents is not another SPA JS framework; it is a set of conventions, patterns and tools to develop and quickly deploy fragments of front-end. In this respect, it plays nicely with any existing architecture and framework in terms of front-end and back-end. Its purpose is to serve as a delivery mechanism for a more modularised end-result in the front-end.

OC has been in production for more than a year at OpenTable and it is fully open-sourced.

Overview

OpenComponents involves two parts:

  • The consumers are web pages that need fragments of HTML for rendering partial contents. Sometimes they need some content during server-side rendering, sometimes when executing code in the browser.
  • The components are small units of isomorphic code mainly consisting of HTML, Javascript and CSS. They can optionally contain some logic, allowing a server-side Node.js closure to compose a model that is used to render the view. When rendered they are pieces of HTML, ready to be injected in any web page.

The framework consists of three parts:

  • The cli allows developers to create, develop, test, and publish components.
  • The library is where the components are stored after the publishing. When components depend on static resources (such as images, CSS files, etc.) these are stored, during packaging and publishing, in a publicly-exposed part of the library that serves as a CDN.
  • The registry is a REST API that is used to consume components. It is the entity that handles the traffic between the library and the consumers.

In the following example, you can see how a web page looks when including both a server-side rendered component (header) and a client-side (still) unrendered component (advert):

<!DOCTYPE html>
...
<oc-component href="//oc-registry.com/header/1.X.X" data-rendered="true">
  <a href="/">
    <img src="//cdn.com/oc/header/1.2.3/img/logo.png" />
  </a>
</oc-component>
...
<p>page content</p>
<oc-component href="//oc-registry.com/advert/~1.3.5/?type=bottom">
</oc-component>
...
<script src="//oc-registry/oc-client/client.js"></script>
Getting started

The only prerequisite for creating a component is Node.js:

$ npm install -g oc
$ mkdir components && cd components
$ oc init my-component

Components are folders containing the following files:

  • package.json - A common node’s package.json. An “oc” property contains some additional configuration.
  • view.html - The view containing the markup. Currently Handlebars and Jade view engines are supported. It can contain some CSS under the <style> tag and client-side Javascript under the <script> tag.
  • server.js (optional) - If the component has some logic, including consuming services, this is the entity that will produce the view-model to compile the view.
  • static files (optional) - Images, Javascript, and files that will be referenced in the HTML markup.
  • Any other files that will be useful for development, such as tests, docs, etc.
Editing, debugging, testing

To start a local test registry using a components’ folder as a library with a watcher:

$ oc dev . 3030

To see how the component looks when consuming it:

$ oc preview http://localhost:3030/hello-world

As soon as you make changes to the component, you will be able to refresh this page and see how it looks. This is an example of a component that handles some minimal logic:

// server.js
module.exports.data = function(context, callback){
  callback(null, {
    name: context.params.name || 'John Doe'
  });
};

To test this component, we can curl http://localhost:3030/my-component/?name=Jack.

Publishing to a registry

You will need an online registry connected to a library. A component with the same name and version cannot already exist on that registry.

# just once we create a link between the current folder and a registry endpoint
$ oc registry add http://my-components-registry.mydomain.com

# then, ship it
$ oc publish my-component/

Now, it should be available at http://my-components-registry.mydomain.com/my-component.

Consuming components

From a consumer’s perspective, a component is an HTML fragment. You can render components just on the client-side, just on the server-side, or use client-side rendering as a failover strategy for when server-side rendering fails (for example because the registry is not responding quickly or is down).
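The failover idea can be sketched in a few lines. This is a hypothetical illustration, not the actual OC client code; renderComponent and serverRender are made-up names:

```javascript
// Hypothetical sketch (not the real OC client): attempt server-side
// rendering and, on failure, emit an unrendered tag so that the
// registry's client.js can render the component in the browser instead.
function renderComponent(href, serverRender) {
  return serverRender(href).catch(function () {
    // Failover: the page still ships; the component renders client-side.
    return '<oc-component href="' + href + '"></oc-component>';
  });
}
```

The consumer's page always gets valid markup; the only difference is whether the fragment arrives rendered or is rendered later in the browser.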

You don’t need Node.js to consume components on the server-side. The registry can provide rendered components so that you can consume them using any tech stack.

When published, components are immutable and semantically versioned. The registry allows consumers to get any version of the component: the latest patch, a specific minor version, etc. For instance, http://registry.com/component serves the latest version, and http://registry.com/component/^1.2.5 serves the most recent 1.x version that is at least 1.2.5.
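To illustrate the idea, here is a simplified sketch of how such resolution behaves. This is not OC’s actual resolution code; resolveVersion is a made-up helper that only handles the no-version, caret, and exact cases:

```javascript
// Simplified sketch of registry version resolution (hypothetical):
// no version requested -> latest; '^1.2.5' -> newest version with the
// same major that is at least 1.2.5; otherwise an exact match.
function resolveVersion(published, requested) {
  var parse = function (v) { return v.split('.').map(Number); };
  var cmp = function (a, b) {
    var pa = parse(a), pb = parse(b);
    return (pa[0] - pb[0]) || (pa[1] - pb[1]) || (pa[2] - pb[2]);
  };
  var sorted = published.slice().sort(cmp);
  if (!requested) return sorted[sorted.length - 1];            // latest
  if (requested[0] === '^') {
    var min = requested.slice(1);
    var matches = sorted.filter(function (v) {
      return parse(v)[0] === parse(min)[0] && cmp(v, min) >= 0;
    });
    return matches.length ? matches[matches.length - 1] : null;
  }
  return published.indexOf(requested) >= 0 ? requested : null; // exact
}
```

Because published versions are immutable, a consumer pinning an exact version will always get byte-identical output, while range requests pick up newer releases automatically.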

Client-side rendering

To make this happen, a components’ registry has to be publicly available.

<!DOCTYPE html>
...
<oc-component href="//my-components-registry.mydomain.com/hello-world/1.X.X"></oc-component>
...
<script src="//my-components-registry.mydomain.com/oc-client/client.js"></script>
Server-side rendering

You can get rendered components via the registry REST API.

curl http://my-components-registry.mydomain.com/hello-world

{
  "href": "https://my-components-registry.mydomain.com/hello-world",
  "version": "1.0.0",
  "requestVersion": "",
  "html": "<oc-component href=\"https://my-components-registry.mydomain.com/hello-world\" data-hash=\"cad2a9671257d5033d2abfd739b1660993021d02\" data-name=\"hello-world\" data-rendered=\"true\" data-version=\"1.0.13\">Hello John doe!</oc-component>",
  "type": "oc-component",
  "renderMode": "rendered"
}

However, to improve caching and reduce response size, when rendering in the browser, or when using the Node.js client or any language capable of executing server-side Javascript, the request will look more like:

 curl http://my-components-registry.mydomain.com/hello-world/~1.0.0 -H Accept:application/vnd.oc.unrendered+json

{
  "href": "https://my-components-registry.mydomain.com/hello-world/~1.0.0",
  "name": "hello-world",
  "version": "1.0.0",
  "requestVersion": "~1.0.0",
  "data": {
    "name": "John doe"
  },
  "template": {
    "src": "https://s3.amazonaws.com/your-s3-bucket/components/hello-world/1.0.0/template.js",
    "type": "handlebars",
    "key": "cad2a9671257d5033d2abfd739b1660993021d02"
  },
  "type": "oc-component",
  "renderMode": "unrendered"
}

With a request like this it is possible to get the compiled view’s URL plus the view-model as data. This is useful for caching the compiled view (taking advantage of components’ immutability).
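A consumer-side cache might be sketched like this. It is a hypothetical illustration, not the real Node.js client; renderUnrendered, fetchAndCompile and the cache shape are all assumptions:

```javascript
// Hypothetical consumer-side sketch: because a given template hash is
// immutable, download and compile it once, then reuse it for every
// subsequent render of that component version.
var templateCache = {}; // content hash -> compiled template function

function renderUnrendered(response, fetchAndCompile, render) {
  var key = response.template.key;
  if (!templateCache[key]) {
    // Only hit the CDN the first time we see this template hash.
    templateCache[key] = fetchAndCompile(response.template.src);
  }
  return render(templateCache[key], response.data);
}
```

After the first render only the small data payload travels over the wire; the template is served from the local cache.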

Setup a registry

The registry is a Node.js Express app that serves the components. It just needs an S3 account to be used as a library.

First, create a dir and install OC:

$ mkdir oc-registry && cd oc-registry
$ npm init
$ npm install oc --save
$ touch index.js

This is how index.js will look:

var oc = require('oc');

var configuration = {
  verbosity: 0,
  baseUrl: 'https://my-components-registry.mydomain.com/',
  port: 3000,
  tempDir: './temp/',
  refreshInterval: 600,
  pollingInterval: 5,
  s3: {
    key: 'your-s3-key',
    secret: 'your-s3-secret',
    bucket: 'your-s3-bucket',
    region: 'your-s3-region',
    path: '//s3.amazonaws.com/your-s3-bucket/',
    componentsDir: 'components'
  },
  env: { name: 'production' }
};

var registry = new oc.Registry(configuration);

registry.start(function(err, app){
  if(err){
    console.log('Registry not started: ', err);
    process.exit(1);
  }
});
Conclusions

After more than a year in production, OC is still evolving. These are some of the most powerful features:

  • It enables developers to create and publish components very easily. None of the operations need any infrastructural work as the framework takes care, when packaging, of making each component production-ready.
  • It is framework agnostic. Microsites written in C#, Node and Ruby consume components on the server-side via the API. In the front-end, it is great for delivering neutral pieces of HTML but works well for Angular components and React views too.
  • It enables granular ownership. Many teams can own components and they all are discoverable via the same service.
  • Isomorphism is good for performance. It enables consumers to render things on the server-side when needed (mobile apps, SEO) and defer to the client-side contents that are not required on the first load (third-party widgets, adverts, SPA fragments).
  • Isomorphism is good for robustness. When something is going bad on the server-side (the registry is erroring or slow) it is possible to use client-side rendering as a fail-over mechanism. The Node.js client does this by default.
  • It is a good approach for experimentation. People can work closely with the business to create widgets that are capable of both getting data from back-end services and delivering it via rich UIs. We very often had teams that were able to create and instrument tests via OC in less than 24 hours.
  • Semver and auto-generated documentation enforce clear contracts. Consumers can pick the version they want and component owners can keep clear what the contract is.
  • A more componentised front-end leads to writing more easily destroyable code. As opposed to writing highly maintainable code, this approach promotes small iterations on very small, easily readable and testable units of code. In this perspective, recreating something from scratch is perfectly acceptable and recommended, as there is almost zero cost for a developer to start a new project, and the infrastructure in place makes maintenance and deprecation as easy as a couple of clicks.

If you wish to try or learn more about OpenComponents, visit OC’s github page or have a look at some component examples. If you would like to give us feedback, ask us questions, or contribute to the project, get in touch via the gitter chat or via e-mail. We would love to hear your thoughts about this project.


At OpenTable it’s becoming an increasingly popular trend to use React. One of the reasons for this is its ability to server-side render whilst still giving us the client-side flexibility that we all crave!

We all know that to have stable, reliable software you need to have well-written tests. Facebook knows this and provides the handy Test Utilities library to make our lives easier.

Cool — I hear you all say! But what is the best approach to testing React components?

Well unfortunately this is something that is not very well documented, and if not approached in the correct way it can lead to brittle tests.

Therefore I have written this blog post to discuss the different approaches we have available to us.

All code used in this post is available on my GitHub.

The Basics

To make our lives a lot easier when writing tests it’s best to use a couple of basic tools. Below is the absolute minimum required to start testing React components.

  • Mocha - This is a testing framework that runs in the browser or Node.js (others are available).
  • ReactTestUtils - This is the basic testing framework that Facebook provides for testing with React.
The Scenario

We have a landing page broken down into two separate components:

  • Container - The holding container for all sub-components.
  • Menu Bar - Contains the site navigation and is always displayed.

Each React component is self-contained and should be tested in isolation.

For the purpose of this exercise we will focus on the test for the container component and
making sure that the menu bar is displayed within it.

Approach 1 (Full DOM):

I like to call this the “Full DOM” approach because you take a component and render it in its entirety, including all of its children. The React syntax is transformed, and any assertion you make will be against the rendered HTML elements.

Below is our test scenario written in this approach.

import React from 'react/addons';
...
import jsdom from 'jsdom';

global.document = jsdom.jsdom('<!doctype html><html><body></body></html>');
global.window = document.parentWindow;

describe('Container', function () {
  it('Show the menu bar', function () {
    let container = TestUtils.renderIntoDocument(<Container />);

    let result = TestUtils.scryRenderedDOMComponentsWithClass(container,
      'menu-bar-container');

    assert.lengthOf(result, 1);
  });
});

If you run the above test it passes but how does it work?

import jsdom from 'jsdom';

global.document = jsdom.jsdom('<!doctype html><html><body></body></html>');
global.window = document.parentWindow;

This sets up our DOM which is a requirement of TestUtils.renderIntoDocument.

let container = TestUtils.renderIntoDocument(<Container />);

TestUtils.renderIntoDocument then takes the React syntax and renders it into the DOM as HTML.

let result = TestUtils.scryRenderedDOMComponentsWithClass(container, 'menu-bar-container');

We now query the DOM for a unique class that is contained within the menu-bar and get an array of
DOM elements back which we can assert against.

The example above is a common approach but is it necessarily the best way?

From my point of view no, as this approach makes our tests brittle. We are exposing and querying the inner workings of the menu-bar, and if someone were to refactor it and remove/rename the “menu-bar-container” class then our test would fail.

Approach 2 (Shallow Rendering):

With the release of React 0.13 Facebook provided the ability to “shallow render” a component.
This allows you to instantiate a component and get the result of its render function, a ReactElement, without a DOM.
It also only renders the component one level deep so you can keep your tests more focused.

import React, { addons } from 'react/addons';
import Container from '../../src/Container';
import MenuBar from '../../src/MenuBar';

describe('Container', function () {
  let shallowRenderer = React.addons.TestUtils.createRenderer();

  it('Show the menu bar', function () {
    shallowRenderer.render(<Container/>);
    let result = shallowRenderer.getRenderOutput();

    assert.deepEqual(result.props.children, [
      <MenuBar />
    ]);
  });
});

Again like the previous example this passes but how does it work?

let shallowRenderer = React.addons.TestUtils.createRenderer();

We first create the shallowRenderer, which handles the rendering of the React components.

shallowRenderer.render(<Container/>);

Then we pass the component we have under test to the shallowRenderer.

let result = shallowRenderer.getRenderOutput();
assert.deepEqual(result.props.children, [<MenuBar/>]);

And finally we get the output from the shallowRenderer and assert that the children contain the menu-bar component.

Is this approach any better than the previous? In my opinion yes, for the following reasons:

  • We don’t rely on the inner workings of the menu-bar to know if it has been rendered, and therefore the markup can be refactored without any of the tests being broken.

  • Fewer dependencies are being used, as shallow rendering does not require a DOM to render into.

  • It’s a lot easier to see what is being asserted as we are able to use JSX syntax in assertions.

Conclusion

So is shallow rendering the silver bullet for React testing? Probably not, as it is still lacking one key feature for me when dealing with large components: the ability to easily query the ReactDOM (libraries like enzyme are working towards improving this). But it is still a lot better than rendering the component out into HTML and coupling your tests to the inner components of others.

In this blog post we have just scratched the surface of testing with React and I hope it’s food for thought when writing your next set of
React tests.


Puppet is an important tool to us at OpenTable; we couldn’t operate as efficiently without it. But Puppet is more than a tool or a vendor: it is a community of people trying to help each other operate increasingly complex and sophisticated infrastructures.

The Puppet community and the open source efforts that drive that community have always been important to us which is why we want to take a step further in our efforts and introduce
you to the “Puppet-community” project.

What is Puppet-community

Puppet-community is a GitHub organisation of like-minded individuals from across the wider Puppet ecosystem and from a diverse set of companies. Its principle aims are to allow the community to synchronise its efforts and to provide a GitHub organisation and Puppet Forge namespace not affiliated with any company.

Its wider aims are to provide a place for module and tool authors to share their code and the burden of maintaining it.

I would like to say that this was our idea, as it’s an excellent one, but actually all credit goes to its founders: Igor Galić, Daniele Sluijters and Spencer Krum

Why communities matter

So why all the fuss about this? Why does it even matter where your code lives?

Well, these are some of the questions that I asked myself when I first heard about this project at PuppetConf 2014. The answer is that it really does matter, and it’s a pattern that is developing elsewhere (see: packer-community, terraform-community-modules, cloudfoundry-community) to deal with the problems you’ll face with a large amount of open source code.

Stepping back slightly, if you look at open source then there are three types: product-based (think open-core), corporate/individual sponsored, and community-driven.

The first is common for businesses (like PuppetLabs) whose product is an open source product. They make great efforts to build a community, fix bugs and accept changes. They make their money through extras (add-ons and/or professional services). They control what they will/won’t accept and are driven by the need to build that community as well as support those big paying customers who pay the bills - it’s a tough balancing act.

The second is what you probably mean when you think about open source. It’s an individual or company that dumps some code they have been working on to GitHub and that’s it - they own it, they control it, and if they don’t like your changes they don’t even have to give a reason. They can also choose to close or delete the project whenever they want, or more likely they will just let it sit on GitHub and move on to the next thing.

The third is the community approach. Create a GitHub organisation, move your projects and add some new people in there with commit access. This is a different approach because it means
that you don’t own it any more, you don’t have that tight control over the codebase because there are other people with other opinions that you have to take into account. It also means
that on long weeks when you’re on-call or on holiday that there is someone else to pick up the slack and merge those pull requests for you. It has massive benefits if you can keep that
ego in check.

Why we’re moving our modules there

So why is OpenTable moving its modules there? It is because we care about the community (particularly those using Puppet on Windows) and want to make sure there is good long-term support for the modules that we authored. OpenTable isn’t a company that authors Puppet modules; it is a company that seats diners in restaurants, so from time to time we are going to work on other things.

By being part of the community there will be other people who can help discuss and diagnose bugs, merge pull requests and generally help with any problems that arise when using
the modules we created.

Sometimes when writing a module it’s not about being the best, sometimes it’s just about being first - we got a bit lucky. What that means though is that we need to recognise that there
are plenty of people out there in the community that have better knowledge than us about a tool or application and might be better suited to guide the project forward - heck we might
even learn from them in the process.

So let’s lose our egos, loosen that grip and let those modules be free …

What that means for you

Ok, so let’s get practical for a second. What’s happening here? What our support of Puppet-community means is that our code has moved into a new organisation
(github.com/puppet-community) and our modules have been re-released under the community namespace on the forge
(forge.puppetlabs.com/puppet). So if you are using our modules then you should go and have a look on the forge and update to the latest versions.
We will continue to provide lots of support to these modules but so will lots of others (including some PuppetLabs employees) so expect the quality of the modules to also start increasing.

If you have any thoughts or questions about this you can reach out to me personally on twitter: @liamjbennett or via email at: liamjbennett@gmail.com


When we first stood up our hapi.js APIs, we wrote init scripts to start/stop them. Stopping the server was simply a case of sending SIGKILL (causing the app to immediately exit).

Whilst this is fine for most cases, if we want our apps to be good Linux citizens, then they should terminate gracefully. Hapi.js has the handy server.stop(...) command (see docs here) which will terminate the server gracefully. It will cause the server to respond to new connections with a 503 (server unavailable), and wait for existing connections to terminate (up to some specified timeout), before stopping the server and allowing the node.js process to exit. Perfect.

This makes our graceful shutdown code really simple:

process.on('SIGTERM', function(){
  server.stop({ timeout: 5 * 1000 }, function(){
    process.exit(0);
  });
});

When we see a SIGTERM, call server.stop(), then once the server has stopped, call process.exit(0). Easy peasy.

Throw a spanner in the works

Whilst server.stop() is really useful, it has the problem that it immediately prevents the server from responding to new requests. In our case, that isn’t particularly desirable. We use service-discovery, which means that the graceful termination of our app should run like this:

  • SIGTERM
  • Unannounce from Service-Discovery
  • server.stop(...)
  • process.exit(0)

Ideally we want the unannounce to happen before the server starts rejecting connections, in order to reduce the likelihood that clients will hit a server that is shutting down.

Plugins to the rescue!

Thanks to hapi.js’s awesome plugin interface (shameless self promotion), we can do some magic to make the above possible.

I created a really simple plugin called hapi-shutdown which will handle SIGTERM and then run triggers before calling server.stop(...).

The idea is that it allows us to run the ‘unannounce’ step, before server.stop(...) is called.

How to use hapi-shutdown
server.register([
  {
    plugin: require('hapi-shutdown'),
    options: {
      serverSpindownTime: 5000 // the timeout passed to server.stop(...)
    }
  }],
  function(err){
    server.start(function(){

      server.plugins['hapi-shutdown'].register({
        taskname: 'do stuff',
        task: function(done){
          console.log('doing stuff before server.stop is called');
          done();
        },
        timeout: 2000 // time to wait before forcibly returning
      });
    });
  });

The plugin exposes a .register() function which allows you to register your shutdown tasks. The tasks are named (to prevent multiple registrations), and each task must call the done() function. The timeout parameter is provided so that a task which never completes won’t block the shutdown of the server.

Neat, huh?

Hooking up unannounce using hapi-shutdown

We now have a place to register our ‘unannounce’ task. Our service-discovery code is wrapped in another plugin, which means we can use server.dependency(...).

// inside the plugin's register function

server.dependency('hapi-shutdown', function(_, cb){
  var err = server.plugins['hapi-shutdown'].register({
    taskname: 'discovery-unannounce',
    task: function(done){
      discovery.unannounce(function(){
        done();
      });
    },
    timeout: 10 * 1000
  });

  cb(err);
});

server.dependency(...) allows us to specify that this plugin relies on another plugin (or list of plugins). If the dependent plugin is not registered before the server starts, then an exception is thrown.

Handily, server.dependency(...) also takes a callback function, which is invoked after all the dependencies have been registered, which means that you don’t need to worry about ordering inside your server.register(...) code.

This allows our unannounce code to be decoupled from the actual business of shutting down the server.


A couple of years ago we started to break up the code-base behind our consumer site opentable.com into smaller units of code, in order to improve our productivity. New teams were created with the goal of splitting up the logic that was powering the back-end and then bringing to life new small services. Then, we started working on what we call Microsites.

Microsites

A microsite is a very small set of web-pages, or even a single one, that takes care of handling a very specific part of the system’s domain logic. Examples are the Search Results page or the Restaurant’s Profile page. Every microsite is an independently deployable unit of code, so it is easier to test and to deploy, and consequently more resilient. Microsites are then all connected by a front-door service that handles the routing.

Not a free ride

When we deployed some microsites to production we immediately discovered a lot of pros:

  • Bi-weekly deployments of the monolith became hundreds of deployments every week.
  • No longer a shared codebase for hundreds of engineers. Pull requests accepted, merged, and often deployed on the same day.
  • Teams experimenting and reiterating faster: product was happy.
  • Diversity of tech stacks: teams were finally able to pick their own favourite web-stack, as long as they were capable of deploying their code and taking care of it in terms of reliability and performance.
  • Robustness: when something was wrong with a microsite, everything else was fine.

On the other hand, we soon realised that we introduced new problems on the system:

  • Duplication: teams started duplicating a lot of code, specifically front-end components such as the header, the footer, etc.
  • Coordination: when we needed to change something on the header, for example, we were expecting to see the change live in different time frames, resulting in inconsistencies.
  • Performance: every microsite was hosting its own duplicated CSS, Javascript libraries, and static resources, resulting in a big performance disadvantage for the end-user.
SRS - aka Site Resources Service

To solve some of these problems we created a REST API to serve HTML snippets, which we soon started to call components. The main characteristics of the system are:

  • We have components for shared parts of the website such as the header, the footer, and the adverts. When a change has to go live, we apply the change, we deploy, and we see the change live everywhere.
  • Output is in HTML format, so the integration is possible whether the microsite is a .NET MVC site or a node.js app.
  • We have components for the core CSS and the JS common libraries, so that all the microsites use the same resources and the browser can cache them making the navigation smooth.
  • The service takes care of hosting all the static resources in a separate CDN, so microsites don’t have to host those resources.

This is an example of a request to the core css component:

curl http://srs-sc.otenv.com/v1/com-2014/resource-includes/css

{
  "href": "http://srs-sc.otenv.com/v1/com-2014/resource-includes/css",
  "html": "<link rel=\"stylesheet\" href=\"//na-srs.opentable.com/content/static-1.0.1388.0/css-new-min/app.css\" /><!--[if lte IE 8]><link rel=\"stylesheet\" href=\"//na-srs.opentable.com/content/static-1.0.1388.0/css-new-min/app_ie8.css\" /> <![endif]-->",
  "type": "css"
}

The downside of this approach is that there is a strict dependency on SRS for each microsite. On every request a call to SRS has to be made, so we had to work hard to guarantee reliability and good performance.
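A sketch of the kind of guard a microsite can put around that per-request call might look as follows. This is hypothetical, illustrative code rather than our actual SRS client; fetchWithFallback and lastKnownGood are made-up names:

```javascript
// Illustrative sketch (hypothetical, not the real SRS client): guard the
// per-request SRS call with a timeout and keep a last-known-good copy of
// the snippet, so a slow or failing SRS degrades gracefully instead of
// blocking page rendering.
var lastKnownGood = ''; // last successfully fetched HTML snippet

function fetchWithFallback(fetchSnippet, timeoutMs) {
  return new Promise(function (resolve) {
    var settled = false;
    var timer = setTimeout(function () {
      if (!settled) { settled = true; resolve(lastKnownGood); } // degrade
    }, timeoutMs);
    fetchSnippet().then(function (html) {
      if (!settled) {
        settled = true;
        clearTimeout(timer);
        lastKnownGood = html; // remember the fresh copy for next time
        resolve(html);
      }
    }, function () {
      if (!settled) { settled = true; clearTimeout(timer); resolve(lastKnownGood); }
    });
  });
}
```

Serving a slightly stale header or css include is usually a far better outcome for the end-user than an error page or a slow response.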

Conclusions

When we tried the microsite approach we “traded” some of our code problems for some new cultural problems. We became more agile and we were working in a new, different way, with the downside of needing to coordinate more people more effectively. The consequence is that the way we approached the code evolved over time.

One year later, with the front-end (almost completely) living on microsites, and with the help of SRS, we are experimenting with more effective ways to be resilient and robust, with the specific goal of allowing teams to create their own components and share them with other teams in order to be independent, and of using them to easily approach A/B experiments.

In the next post I’ll write about OpenComponents, an experimental framework we just open-sourced that is trying to address some of these needs.

TL;DR

This is a long post that describes how to set up an in-memory test harness for testing an entire ASP.NET Core API, with lots of code examples.

Testing and Broccoli

Unit testing, like broccoli, is something we all know we should do, but really we just want to tuck into the meat and potatoes of production code. One reason you might not bother with unit tests is that you are writing an API which is mostly a CRUD layer over a database (where most of the code is right-to-left copying from a DTO to an API response), or you are writing framework plugins for which you just can’t write meaningful tests.

I feel your pain, I’ve been there. What I prefer to do is test at the level of HTTP, all the way through the API to the database. But wait, you may cry, that’s not a unit test! Well yes, you have a point there; I’m touching lots of different classes when I test like this. I would argue however that the word “unit” is a bit ambiguous; my definition of unit is ‘not a class’.

I prefer to classify tests as either Isolated, Integration, or End-to-End. When I say isolated, what I mean is that the test can be run without worrying about external dependencies, whether they are your databases, an external API, or some other system. This means you can test anything from a single class up to the whole API, reliably and repeatably. Integration tests then focus on your integrations with your databases or other APIs, and End-to-End tests end up being more akin to smoke tests or sanity tests.

Given these definitions let’s look at how you can test your API end-to-end in an isolated fashion. Or put another way, let’s cook that broccoli in butter and chilli and season with salt, pepper and a little grated parmesan (you’re welcome!).

In order to test an ASP.NET Core application we need to be able to spin up our whole application in a test harness (i.e. your unit test/test fixture). Microsoft provide a NuGet package that lets us do just that; Microsoft.AspNetCore.TestHost contains a class called TestServer. Once you create an instance of the TestServer you can get an instance of HttpClient with which you can then make HTTP calls to your API.

Creating a TestServer instance

To create an instance of the TestServer you need to supply its constructor with an instance of a WebHostBuilder. The easiest way to create a WebHostBuilder is in the same manner as you would when you are building your IWebHost implementation to run your site with Kestrel.

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    private static IWebHost BuildWebHost(string[] args)
    {
        return new WebHostBuilder().UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseStartup<Startup>()
            .UseUrls("http://*:" + ApplicationInfoProvider.GetPort())
            .Build();
    }
}

So in a test harness you would write

var webHostBuilder = new WebHostBuilder().UseStartup<Startup>();
var server = new TestServer(webHostBuilder);
var httpClient = server.CreateClient();

The problem with this is that if you just use the Startup class you use for normally spinning up your site, you end up with the exact same configuration and therefore the same dependencies. You may think to create a new version of Startup (InMemoryStartup for instance) and reference that; the problem then is that you don’t have an easy way to manipulate the state of your API from your test harness, unless you want to use static properties. That might work in simple cases, but with complex systems managing that static state might become problematic - and who likes statics anyway?

What we really want to do is pass an instance of a Startup class (designed for in-memory testing) to the WebHostBuilder constructor, but there is no native support for this. To get around this, use the following extension method.

public static class WebHostBuilderExtensions
{
    public static IWebHostBuilder UseStartupInstance(this IWebHostBuilder hostBuilder, IStartup startup)
    {
        string name = startup.GetType().GetTypeInfo().Assembly.GetName().Name;
        return hostBuilder.UseSetting(WebHostDefaults.ApplicationKey, name).ConfigureServices(services =>
        {
            if (typeof(IStartup).GetTypeInfo().IsAssignableFrom(startup.GetType().GetTypeInfo()))
                services.AddSingleton(startup);
            else
                services.AddSingleton(typeof(IStartup), serviceProvider =>
                {
                    IHostingEnvironment requiredService = serviceProvider.GetRequiredService<IHostingEnvironment>();
                    return new ConventionBasedStartup(StartupLoader.LoadMethods(serviceProvider, startup.GetType(), requiredService.EnvironmentName));
                });
        });
    }
}

As an aside, this post https://www.stevejgordon.co.uk/aspnet-core-anatomy-how-does-usestartup-work is a very in-depth look at how Startup classes work in ASP.NET Core.

You might have noticed the IStartup type in the UseStartupInstance method above and be thinking, wait, my Startup class doesn’t implement that, what’s going on? Well, it’s basically some convention-based magic which is explained and examined in the blog post I reference above. It’s the kind of thing that personally irritates me as it makes things less discoverable for the sake of not writing " : IStartup" for each project (and I’m sure there are other reasons).

We can now inject a Startup class as shown below.

var webHostBuilder = new WebHostBuilder().UseStartupInstance(new Startup());
var server = new TestServer(webHostBuilder);
var httpClient = server.CreateClient();

But now this won’t work because your normal Startup class does not implement IStartup, therefore we need to create an InMemoryStartup class that does implement this, and which we can customize for testing purposes. This would look something like the following.

public class InMemoryStartup : IStartup
{
    public InMemoryStartup()
    {
    }

    public IServiceProvider ConfigureServices(IServiceCollection services)
    {
        // Some code to configure services

        return services.BuildServiceProvider();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Some code to configure application
    }
}

You would then construct your test harness as follows (if you are using NUnit)

[TestFixture]
public class SomeInMemoryTests
{
    private HttpClient _httpClient;

    [SetUp]
    public void StartupInMemoryApi()
    {
        var webHostBuilder = new WebHostBuilder().UseStartupInstance(new InMemoryStartup());
        var server = new TestServer(webHostBuilder);
        _httpClient = server.CreateClient();
    }

    [Test]
    public async Task TestGettingSomeData()
    {
        var response = await _httpClient.GetAsync("/some-data/123");
        var data = JsonConvert.DeserializeObject<Data>(await response.Content.ReadAsStringAsync());
        Assert.That(data.Id, Is.EqualTo(123));
    }

    [Test]
    public async Task TestPostingSomeData()
    {
        //etc...
    }
}

As you can see this provides a simple method of testing an API inside your test harness. The HttpClient provides convenience methods for GET, POST, PUT, PATCH and DELETE, and also a general SendAsync method that allows for fine-grained control.
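The SendAsync route is useful when a request needs more than the convenience methods offer, such as custom headers or an unusual HTTP method. A minimal sketch (the correlation-id header here is purely illustrative, not something the API under test requires):

```csharp
// Build the request by hand for full control over method, headers and body.
var request = new HttpRequestMessage(HttpMethod.Get, "/some-data/123");

// Hypothetical header, just to show where custom headers go.
request.Headers.Add("X-Correlation-Id", Guid.NewGuid().ToString());

var response = await _httpClient.SendAsync(request);
response.EnsureSuccessStatusCode();
```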

What I’ve described here isn’t much more interesting or detailed than what is already described in similar posts. And like all of them so far we have skipped over an important detail; what about databases or any other external service on which I depend? If I simply replicate my application startup logic, I’m still pointing at a real database. This will prevent the tests from being repeatable and reliable as the external database is outside the control of the test harness.

To get around this we will need to design our InMemoryStartup class to inject Test Doubles into the application in place of your abstractions around your external dependencies (and I assume you are doing this, right?).
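As a rough idea of what such a hand-rolled test double might look like, here is a minimal FakeDatabase backed by an in-memory list. The shape of the IDatabase interface is an assumption for illustration; your real abstraction will have whatever methods your application needs, and Data is the DTO type used in the examples below.

```csharp
using System.Collections.Generic;
using System.Linq;

// Assumed shape of the database abstraction - yours will differ.
public interface IDatabase
{
    Data GetById(int id);
    void Save(Data data);
}

// A hand-rolled test double backed by an in-memory list.
public class FakeDatabase : IDatabase
{
    public List<Data> Data { get; set; } = new List<Data>();

    public Data GetById(int id) => Data.FirstOrDefault(d => d.Id == id);

    public void Save(Data data) => Data.Add(data);

    // Convenience method so tests can seed state directly.
    public void AddTestData(Data data) => Data.Add(data);
}
```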

The place to do this is in the ConfigureServices method, where we are setting up our dependency injection container with our dependencies. We would also want to be able to pass our test doubles in from our test harness so that the test can control these doubles and also read data from them. The InMemoryStartup class would end up looking something like this.

public class InMemoryStartup : IStartup
{
    private readonly IDatabase _databaseTestDouble;
    private readonly IClientApi _apiTestDouble;
    private readonly ILog _logTestDouble;

    public InMemoryStartup(IDatabase databaseTestDouble, IClientApi apiTestDouble, ILog logTestDouble)
    {
        _databaseTestDouble = databaseTestDouble;
        _apiTestDouble = apiTestDouble;
        _logTestDouble = logTestDouble;
    }

    public IServiceProvider ConfigureServices(IServiceCollection services)
    {
        services.AddSingleton<IDatabase>(_databaseTestDouble);
        services.AddSingleton<IClientApi>(_apiTestDouble);
        services.AddSingleton<ILog>(_logTestDouble);

        //Add other dependencies that are not external

        return services.BuildServiceProvider();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Some code to configure application
    }
}

Given this we can now modify our tests as follows…

[TestFixture]
public class SomeInMemoryTests
{
    private HttpClient _httpClient;
    private Data _testData;

    [SetUp]
    public void StartupInMemoryApi()
    {
        _testData = new Data { Id = 123, Data = "some data" };
        var databaseDouble = new FakeDatabase { Data = new List<Data> { _testData } };
        var startUp = new InMemoryStartup(databaseDouble, null, null);
        var webHostBuilder = new WebHostBuilder().UseStartupInstance(startUp);
        var server = new TestServer(webHostBuilder);
        _httpClient = server.CreateClient();
    }

    [Test]
    public async Task TestGettingSomeData()
    {
        var response = await _httpClient.GetAsync("/some-data/123");
        var data = JsonConvert.DeserializeObject<Data>(await response.Content.ReadAsStringAsync());
        Assert.That(data.Id, Is.EqualTo(_testData.Id));
        Assert.That(data.Data, Is.EqualTo(_testData.Data));
    }
}

In this particular example I have only supplied a test double for the database; in other tests you would want to provide doubles for the other dependencies as well. In fact, I would suggest that you set up all of your dependencies for every test case, and do so consistently across all tests, for the simple reason that you may suffer side effects when different tests run against different configurations of your API.

Given that it is advisable to set up all of your dependencies for every test, it becomes obvious that you should encapsulate the setup of your dependencies in a class that you might call InMemoryApi. It could look something like this.

public class InMemoryApi
{
    public InMemoryApi()
    {
        DatabaseTestDouble = new FakeDatabase();
        ClientApiTestDouble = new FakeClientApi();
        LoggerTestDouble = new FakeLogger();

        var startUp = new InMemoryStartup(DatabaseTestDouble, ClientApiTestDouble, LoggerTestDouble);
        var webHostBuilder = new WebHostBuilder().UseStartupInstance(startUp);
        var server = new TestServer(webHostBuilder);
        Client = server.CreateClient();
    }

    public FakeDatabase DatabaseTestDouble { get; private set; }
    public FakeClientApi ClientApiTestDouble { get; private set; }
    public FakeLogger LoggerTestDouble { get; private set; }
    public HttpClient Client { get; private set; }
}

Which then simplifies your test harness to look like this:

[TestFixture]
public class SomeInMemoryTests
{
    private InMemoryApi _api;
    private Data _testData;

    [SetUp]
    public void StartupInMemoryApi()
    {
        _testData = new Data { Id = 123, Data = "some data" };
        _api = new InMemoryApi();
        _api.DatabaseTestDouble.AddTestData(_testData); //a method on your test double
    }

    [Test]
    public async Task TestGettingSomeData()
    {
        var response = await _api.Client.GetAsync("/some-data/123");
        var data = JsonConvert.DeserializeObject<Data>(await response.Content.ReadAsStringAsync());
        Assert.That(data.Id, Is.EqualTo(_testData.Id));
        Assert.That(data.Data, Is.EqualTo(_testData.Data));
    }
}

You can now easily write further tests, reusing this InMemoryApi type with very little additional effort. All of your usual techniques for creating a common test context can be used to further reduce duplicate setup code. You can also use libraries such as NMock or NSubstitute for creating your test doubles instead of hand-rolling them (hand-rolling being my preference).
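For instance, a generated substitute can stand in for the same abstraction in a couple of lines. This is a sketch using NSubstitute and assumes an IDatabase interface with a GetById method, which will differ in your codebase:

```csharp
using NSubstitute;

// Let NSubstitute generate the test double instead of hand-rolling it.
var databaseDouble = Substitute.For<IDatabase>();

// Stub the assumed GetById method to return canned data.
databaseDouble.GetById(123).Returns(new Data { Id = 123, Data = "some data" });

// Substitutes for the other dependencies keep the startup fully isolated.
var startUp = new InMemoryStartup(databaseDouble, Substitute.For<IClientApi>(), Substitute.For<ILog>());
```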

You may at this point be thinking there is a risk of inconsistency between the API when it is running against your Startup class and your InMemoryStartup, as these both need to be very similar but also different enough to allow for testing. There are approaches to reduce this risk to a minimum which I plan to cover in a follow-up post as this one has already gone on for long enough.

I hope this was useful for you and had enough depth to get you started with this style of in-memory API testing.


A couple of years ago I was asked to give a talk to programming undergraduates at Kings College, London. I wrote up the session as a blog post and added it to my personal website, where it has received one or two hits since.

Reading it back this week I was pleasantly surprised how relevant and useful it still is, and as we are currently hiring engineers at the start of their career it is worth resharing here.

Getting that first job

Whilst I can still remember the interview for my first job, I have greater insight these days from interviewing developers, and I have personally recruited, managed and mentored a number of junior engineers at the beginning of their careers. The next sections talk about our hiring process at OpenTable and within this framework here’s how you’d get us to give you a job.

The CV

The CV is not as crucial as you might think. Get the basics right – no typos and a neat layout – but don’t cram it full of everything you’ve ever done, just enough to whet our appetite. A single page is usually best (no more than two).

The most important thing for a job as a developer is to show that you love writing code. Nothing conveys this passion better than sites/plugins/projects/online courses that you have worked on, particularly outside of your employment. Even better, provide links to repositories, websites, blogs or Q&A sites to show your work and genuine interest.

Dare I say it, but your exams results do not really matter to us. Passion, backed up with examples of work will grab our attention, with that hard-earned first or 2:1 counting only as a tie-breaker.

The code test

If we like your CV we’ll ring you for a quick chat, and then ask you to complete a test - either at our office or in your own time. The coding test will vary from role-to-role but will need skills that will be required in the job. If you can’t do the test at all then this isn’t the right role for you, but if you can do part of it then give it a go as it will give us things to talk about in the next stage of the interview.

The face-to-face interview

If there’s enough potential in your test we’ll invite you for an interview. It is daunting, but we like to talk with you for two to three hours to make sure you’re right for the role and that the role is right for you.

The first half hour will be a code review of your test in which we’ll get you to explain how you completed the exercise and we’ll pair with you to refactor or modify the test’s functionality. We’ll be impressed at this stage if you’ve looked again at your code before the interview and can confidently justify your programming decisions. Even if you couldn’t do half the things required in the test, you stand a good chance if you’re knowledgeable about the sections you did complete.

For the next 30-60 minutes we’ll conduct a technical interview. You’ll be interviewed by people whose skills overlap with yours and we’re looking for both a general programming understanding and a couple of subjects in which you can speak more deeply. If you don’t think you’re going to be asked about your favourite subjects try and drop them into the conversation. “Do you work with xxx because I’m really interested in that?” will grab our attention and prompt us to ask more.

The next 40-60 minutes will be a “cultural” interview in which we want to get to know you as a person, how you like to develop code and your understanding of the software development lifecycle. Even if you’ve never written code professionally try and convey passion and a genuine interest and you’ll impress us. A sense of humour is always welcome.

Finally we’ll ask you to spend some time with the head of engineering in London. This is hopefully the most relaxed time in the process. If you’re good enough to make us want to meet you you’ll definitely have other companies knocking on your door so we’ll try and convince you that OpenTable is a great place to work and assess whether it is the right place for you. We encourage candidate questions throughout the day but this is the best time to have a genuine chat.

The first year or two in the job

Congratulations, you’ve got your first job. What now? Now, you simply carry on learning (and get paid for it).

You won’t know a fraction of what the job involves, but in software development no one can know everything. Not even search engines know it all and this is where you will spend a lot of your time. Vastly experienced developers still have to google the answers to things, but when you start out you’ll be doing this a great deal – and that’s absolutely fine.

If you have the drive to solve problems and know how to look up answers then you’re in the right career. Never be embarrassed to teach yourself as you go. Trial and error will be your default technique and you’ll probably repeat the same mistakes more than once. Software development is constantly changing but if you’re always learning then you’ll be successful.

If you want to try something new in your job don’t ask permission, just give it a go. Unless it could affect the company’s bottom line, most mistakes are forgivable and you’ll learn your lessons. Just don’t be reckless.

Get active in the developer community. There are hundreds of free and inexpensive meet-ups and conferences. Be talkative with the people you meet – you’ll learn from them and get to hear about projects or jobs that could be perfect for you. Don’t be self-conscious, and ask as many questions as you can. You’ll find this much easier earlier in your career before you’re too embarrassed because you feel you should already know.

Things to consider as you progress

You’ll hopefully love the company you work for, but only stay with them if you’re genuinely still learning. Job hunting is hard and it’s easy to pretend your current job is just fine – especially if you like your colleagues – but be honest with yourself about your situation to keep progressing.

Don’t feel like you have to go into management. I’m a manager now and describe my job as “talking to people” – but if this isn’t for you then find a company that nurtures individual contributors. You can still become very senior in the industry as a Principal Engineer or Architect, with little or no people management required.

Finally, do your best to build a good relationship with your manager. This starts with being reliable and prepared, but take it to the next level by understanding your manager’s biggest problems and frustrations, and doing what you can to solve them. If you struggle to communicate with your manager then identify someone with whom they have a good relationship, analyse why, and emulate this. Try to understand the business strategy and identify opportunities and threats.

A proactive, reliable employee who understands their manager will get the interesting projects and rapid career progression.

In summary
  • Start building things straightaway
  • Be passionate in your interview
  • Embrace trial and error, don’t be afraid to make mistakes
  • Get involved in the developer community
  • Don’t stay too long in a job in which you’re not learning
  • Get on the same wavelength as your manager for good, long-term prospects