Web Coding Articles, How-To's and Guides

Unit Testing and why it should be used

Author: Alicia Sykes
Date: Sunday 11th October 2015


What is Unit Testing?

Unit testing is where the program is broken down into a series of units - functions or discrete areas of code - and each of these is tested individually, in isolation and in detail. This allows us to check that each function works as it should: if we give one of our methods a set of inputs, we can verify that we get the expected output for each case.

We need to check our functions not only with predictable values, but also with borderline values and, just as importantly, totally unexpected values (e.g. wrong datatype, wrong size, empty, irrelevant...).
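For example, here is a minimal sketch using Node's built-in assert module and a hypothetical add function, checking a predictable value, a borderline value and a totally unexpected one:

var assert = require('assert');

// A hypothetical unit under test
function add(a, b) {
    if (typeof a !== 'number' || typeof b !== 'number') {
        throw new TypeError('add expects two numbers');
    }
    return a + b;
}

assert.equal(add(2, 3), 5);                             // predictable value
assert.equal(add(0, 0), 0);                             // borderline value
assert.throws(function() { add('2', 3); }, TypeError);  // unexpected datatype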

Only the characteristics of that unit need to be tested, as everything else in the application will be covered by the other unit tests.

Unit testing is usually an automated process: once the tests are written, they run automatically whenever they are configured to (for example on every build or commit).

Benefits of Unit Testing


  • Identify failures in our code BEFORE it gets integrated with the larger application
  • Allows you to keep verifying that your method still works as expected in all (tested) cases while you refactor or change the logic in the method's body
  • If you're approaching development from a unit-testing perspective, you'll likely be writing code that is easier to test - more modular, clear, standalone methods - and that is better code.
  • Prevent future changes from breaking functionality.
  • They help you really understand the design of your code
  • They give you instant feedback, and that green tick when they all pass is so satisfying!
  • Faster to develop more robust code
  • They can help with code reuse
  • Forces better code documentation

Disadvantages of Unit Testing

  • When you're just getting started, tests can be time-consuming while you get used to writing them. Ultimately they will save time in the long run, but it doesn't always feel like that.
  • Learning curve. Although the principle of unit testing is very simple, when you actually sit down to write unit tests for the first time it is often hard to know what you should be testing in each module, and how. A solution to this is to look at example tests on the internet. For example, all decent Node.js projects and modules on GitHub will have a test directory that you can read or run.
  • Trying to retrofit unit tests to legacy or badly written code is sometimes close to impossible. The solution to this is not to write bad code in the first place, and to write tests first or during development.
As you can see, the advantages massively outweigh the fairly weak disadvantages.

Unit testing confirms that the code you're writing is awesome!


A good unit test:

  • Is fully automated
  • Has full control over all the pieces running (use mocks or stubs to achieve this isolation when needed)
  • Can be run in any order, even as part of a larger set of tests
  • Runs in memory (no DB or file access, for example)
  • Consistently returns the same result (you always run the same test, so no random numbers, for example - save those for integration or range tests)
  • Runs fast
  • Tests a single logical concept in the system
  • Is readable
  • Is maintainable
  • Is trustworthy (when you see its result, you don't need to debug the code just to be sure)
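To illustrate a few of these points, here is a sketch of an isolated, deterministic test - the getUserName function and the stubbed database are hypothetical. It runs in memory, has full control over its dependencies, and always returns the same result:

var expect = require('chai').expect;

// A hypothetical unit that depends on a data source
function getUserName(db, id) {
    var user = db.findById(id);
    return user ? user.name : null;
}

describe('getUserName', function() {
    it('should return the name for a known id', function() {
        // Stub the database so no real DB access is needed
        var stubDb = { findById: function(id) { return id === 1 ? { name: 'Alice' } : null; } };
        expect(getUserName(stubDb, 1)).to.equal('Alice');
    });

    it('should return null for an unknown id', function() {
        var stubDb = { findById: function() { return null; } };
        expect(getUserName(stubDb, 42)).to.equal(null);
    });
});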


Links for Further Reading

Unit testing in AngularJS

Unit testing in PHP

Unit testing in Android

Unit testing in NodeJs

Unit testing in Swift





Coverage testing in Node.js with Istanbul

Author: Alicia Sykes
Date: Sunday 4th October 2015


What is Coverage Testing?

Coverage testing determines what proportion of your source code is covered by your tests. 

It's useful to be able to check this as you're developing, writing tests and testing, so that you can aim as close as possible to 100% coverage.

How do I use Coverage Testing in my Node.js / JavaScript application?

It is really easy to run coverage tests in your project, even easier if you are using a testing framework and have everything set up. 

There is a Node module called Istanbul, which takes care of everything for you. 
Read more about Istanbul here: https://github.com/gotwarlost/istanbul

1. Install Istanbul both globally and as a dev dependency
npm install istanbul --save-dev
npm install istanbul --global

2. Run a quick coverage test on one of your test files like so
istanbul cover test/my-test.js

3. Add a script entry to your package.json to run a coverage command on ALL tests at once, easily
"scripts": {
    "start": "node app.js",
    "test": "mocha",
    "cover": "istanbul cover node_modules/mocha/bin/_mocha --dir ./reports/coverage"
  }

(This works great for a Mocha setup on Windows; it may be slightly different for other testing frameworks.)

4. Run your coverage tests!
npm run cover

You should see a nice summary of your coverage in the console. 
Also check out the more detailed HTML report that Istanbul kindly created for you. 
Don't forget to add the reports directory to your .gitignore !
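For example, assuming the --dir option shown above (which writes reports into ./reports), your .gitignore might contain:

# Keep generated coverage reports out of version control
reports/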





Check out the example project on GitHub:


Setting up a unit testing environment in Node.js

Author: Alicia Sykes
Date: Sunday 27th September 2015



Introduction

In this post we'll go through the complete process of setting up your test environment in a Node.js app, and then write a few simple unit tests. Although I've aimed this for Node apps, you should be able to follow these steps for any JavaScript setup such as Ionic.


1. Setting up a Testing Framework

A testing framework takes care of the overall test structure for us, makes it easy to run our tests, and allows us to use additional plugins if necessary (such as generating HTML test reports, or providing coverage-testing features). It is possible to do all this with vanilla JavaScript, but it's a lot more work.

The two most popular test frameworks are Mocha and Jasmine. For this we will be using Mocha.

Since we will use the 'mocha' command, we must first install Mocha globally. If you don't, you will get the error message 'mocha is not recognized as an internal or external command' (or the Mac equivalent).
npm install mocha --global

If you're using git, or have multiple developers working on the project, you'll also want to add Mocha to your package.json so others can just run npm install to populate their node_modules.
First initialize your package.json (if you haven't already done so):
npm init

Then add mocha to your devDependencies and save it in node_modules by running
npm install mocha --save-dev

Next we need to create a directory to store our tests. By default Mocha will look for a folder called 'test' inside the project's root, so we must use this name exactly.
mkdir test

Summary

The following commands can be used to set up Mocha in a new project. (Mac commands might be slightly different.)
mkdir test-example && cd test-example :: create new project directory

npm i mocha -g :: install mocha globally so the mocha command can be used

npm init :: initialise package.json

npm i mocha -D :: add mocha to your project's devDependencies

mkdir test :: create new folder for tests


2. Setting up an Assertion Library

Node does have some assertion functionality built in, although it is quite basic and not particularly nice to use. So instead we are going to use Chai as an assertion library - there are several others out there, but Chai is well established and has good documentation.
So first off we need to install chai to our project like we do with all node modules:
npm install chai --save-dev

Next we are going to create a file to put our tests in. Remember this should be inside the test directory. The file can be called whatever you like (but keep it relevant), and the extension should be .js (or .coffee if you're writing your tests in CoffeeScript).

Inside your new test file, the first step - as with any node module - is to include Chai in your file.
var chai = require('chai');

If you visit the Chai website (http://chaijs.com/) you will see that Chai has three interfaces: Should, Expect and Assert.

It is up to you which one you use, and it's easily possible to use a combination. They all work in a similar way; the only main difference is the syntax and the structure of the test blocks you write. Should reads much more like English, Assert is more like conventional JUnit comparisons, and Expect is somewhere in between.
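As a quick illustration, here is a minimal sketch of the same assertion written in each of the three styles:

var chai = require('chai');
chai.should();               // Should style: extends Object.prototype
var expect = chai.expect;    // Expect style
var assert = chai.assert;    // Assert style

var foo = 'bar';

foo.should.equal('bar');        // Should - reads like English
expect(foo).to.equal('bar');    // Expect - somewhere in between
assert.equal(foo, 'bar');       // Assert - JUnit-style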

So if we were using Expect, the first thing we would need to do is define the expect method.
var expect = chai.expect;

The format of your tests will have a describe block stating what the code module is being tested for, containing a series of it blocks which hold the Chai tests, in this format:
describe('JavaScript example test', function(){
    it('should return true in JavaScript',function(){
        expect(true).equal(true);
    });
});


The format of an expect method is like this:
expect(foo).to.be.a('string');
expect(foo).to.equal('bar');
expect(foo).to.have.length(3);
expect(tea).to.have.property('flavors')
  .with.length(3);

Example from the Chai documentation, view more here: http://chaijs.com/guide/styles/#expect


3. Running Tests, and specifying additional options

Once you have a sample test like the one above, you can run it to see the result.
To run all tests in the command line, use the following command:
mocha

Running tests from a single file

If you'd like to run just tests from a single file, you can type mocha followed by the path to the test file.
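For example, with a hypothetical file name:
mocha test/example-test.js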

Specifying test path in package.json

It's good practice to specify a command to execute your tests in your package.json.
This allows other developers to run 'npm test' on any project, whatever testing framework or setup is implemented.
In the scripts section, where you've specified the entry point, add a test command:
"scripts": {
    "start": "node ./bin/www",
    "test": "mocha",
  }

This will mean you can run npm test and npm will run the mocha command.

Passing additional parameters to mocha

With Mocha, it's possible to pass additional parameters to specify options such as how tests are run.
You can change things like how results are displayed, what language you write your tests in (e.g. coffee), whether it should look in sub-directories or not, the timeout, and so on.
This can be done with the ordinary flag syntax, e.g.
mocha --reporter nyan --recursive

You can see a full list of flags in the Mocha documentation: https://mochajs.org/

The above command will set the reporter (how results are displayed) to be Nyan cat (check it out, it's pretty cool), and the recursive flag will mean that mocha will also execute tests in sub-directories.

But it's a bit of a pain having to type all those flags every time instead of just calling the mocha command. So you can instead create a file called 'mocha.opts' inside your test directory and specify a list of flags in there. Then you can just run the mocha command without typing any parameters in the command line.

Put each flag on a new line, and use long flags rather than short-hand so that it's clear for other developers. Here is an example mocha.opts file:
https://github.com/mochajs/mocha/blob/master/test/mocha.opts
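A minimal mocha.opts using the flags from earlier in this article would look like this:
--reporter nyan
--recursive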

4. Coverage Testing

See the coverage testing article earlier in this collection: Coverage testing in Node.js with Istanbul.




Introduction to Test Driven Development

Author: Alicia Sykes
Date: Sunday 20th September 2015


First off, what do tests provide us with?

  • Documentation for our code
  • Catch future errors
  • Long-term time savings - because errors have been found before anything's been deployed to production
Although all the above are true, using tests like this is just a tool - not a process.

What is TDD?

In its simplest form, TDD comes down to the following process:
  • Decide what the code will do
  • Write a test that will pass if the code does that thing
  • Run the test, to prove it will fail
  • Write the code
  • Run the test again, to see it pass
It's important to note that you must not write the code until you've written the test.
It is also essential to ensure the test actually fails first; it is surprisingly easy to make a small mistake in your test case that means your test will always pass, and that's not the type of error that anyone is likely to look into. It's also sometimes necessary to back-test: break the code to show that the test fails.
This should be done for every couple of lines of code, every method.
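As an illustration of this cycle, here is a sketch using Mocha and Chai with a hypothetical capitalize function - the test is written and run first (and fails), then the code is written to make it pass:

var expect = require('chai').expect;

// Steps 1 & 2: decide what the code will do, and write the test first
describe('capitalize', function() {
    it('should uppercase the first letter', function() {
        expect(capitalize('hello')).to.equal('Hello');
    });
});
// Step 3: run mocha - the test fails, because capitalize doesn't exist yet

// Step 4: now write the code
function capitalize(str) {
    return str.charAt(0).toUpperCase() + str.slice(1);
}
// Step 5: run mocha again, and watch the test pass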

What does TDD provide?

  • Design and plan before you code
  • Documenting your design before you build it
  • Proving that the code implements that design
  • Encouraging the design of testable code - very important!!
Testable code is good code!
This is because if you have long methods or functions with loads of if statements, it's just not possible to write tests for them. If you write the tests first, you can't write code that is too complicated.

Testable code is:

  • Modular, as we're forced to break things down so we can test them
  • Decoupled in design - if our objects or methods are too tightly interwoven, we can't test them independently
  • Made of methods with limited scope that don't try to do too much in one place
  • etc... 
Basically, good testable code will have a much lower cyclomatic complexity. This is a measure of how many different paths there are through the code - essentially every conditional statement you add gives you another route, and another set of tests.
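For example, in this hypothetical sketch two independent conditionals already give four distinct paths through the function, each of which needs a test case:

// 2 independent decisions = 2 x 2 = 4 paths to test
function shippingCost(weight, isExpress) {
    var cost = weight > 10 ? 20 : 10;   // decision 1: heavy or light
    if (isExpress) {                    // decision 2: express or standard
        cost += 5;
    }
    return cost;
}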

If you're finding the test complicated to write, then that's a code smell - you're going about it the wrong way.

Result of TDD

Better code in less time *

*So it might not feel like it's going faster, because it's a process rather than just hacking, and processes feel tedious. It may also take some practice to get up to speed, but it's fully worth it in the long run for speed and code quality.

Use your judgement about when to test

Although nearly all code should be tested thoroughly there are some exceptions:
  • Some things are too hard to test - especially where external services are involved
  • Some tests are too trivial to be useful
  • Over-testing is possible 
  • Exploratory coding, when you're not sure how it's going to be used - so not for production code.



How to write a gulpfile

Author: Alicia Sykes
Date: Sunday 13th September 2015


Setting up a new project and getting it ready for Gulp

Gulp is simple to set up. Presuming you have Node.js already installed:
  • In the command line, navigate into the root of your project working directory
  • Install Gulp with npm install gulp --save-dev. This will add gulp into your node_modules folder. The --save-dev part will add gulp to the devDependencies in your package.json file. It is similar to --save/-S, only it's a dependency that's only required for development of your app.
  • Create a new JavaScript file in your project root directory called gulpfile.js 
  • This is the file where we will put all our build configuration in...

How to write the gulpfile.js

Install Plugins

Firstly you'll need to install and include the plugins you need. Every task in Gulp uses a plugin. For this example we'll be compiling Sass.
If you haven't already done so, in your console run:
npm install gulp --save-dev

npm install gulp-sass --save-dev

This will install both gulp and our first plugin, gulp-sass. It will also add the dev dependencies to our package.json.

Require Necessary Modules

Back in the gulpfile.js, we need to include these modules in the same way that you'd include any other node module. So at the top of your gulpfile.js paste the following code:
var gulp = require('gulp');

var sass = require('gulp-sass');

Creating a gulp task

Now we need to actually write the code to tell gulp what to do with this plugin.
To do this we call the task method in gulp. This method takes two parameters: firstly a string, which can be whatever you want to call your task (in this example I called it sass - seems to make sense).
Secondly we pass it a function that does the work. The format of this function is:
  • First we pass a glob of which files and folders to look for; this is done with the gulp.src method (in this example it's all files within the scss folder with the file extension .scss).
  • Then we call gulp's pipe method on that file selection, where we pass it the plugin as a parameter.
  • Then finally we give gulp a destination where the processed files should be saved. We do this using the gulp.dest method.

gulp.task('sass', function() {
    return gulp.src('scss/*.scss')
        .pipe(sass())
        .pipe(gulp.dest('css'));
});


Running the Gulp task

Try testing out what we wrote above by running the following command in the console
gulp sass
This will look for the gulpfile.js in the current directory, then look for the task called 'sass' and run it.
What you should see is all your Sass code inside the scss directory compiled to CSS and saved in your css directory.

Watching files for changes

Now what would be really good is if gulp could just wait until every time we make a change to our Sass, and then compile it into CSS automatically. This is actually really easy to set up using gulp watch.
gulp.task('watch', function() {
    gulp.watch('scss/*.scss', ['sass']);
});

We have named this task 'watch', and what it is doing is watching for changes in .scss files inside the scss folder and then running the 'sass' task.
If you also had a coffeescript task, you could just add another line inside this method looking something like this:
gulp.watch('cscripts/*.coffee', ['coffee']);


Including Multiple Plugins in a single task

It's straightforward to run multiple operations in one task - for example, process all your scripts in one task, or all your images in another. E.g.
// These pipes assume the gulp-concat, gulp-rename and gulp-uglify plugins
var concat = require('gulp-concat');
var rename = require('gulp-rename');
var uglify = require('gulp-uglify');

gulp.task('scripts', function() {
    return gulp.src('js/*.js')
        .pipe(concat('all.js'))        // join all scripts into one file
        .pipe(gulp.dest('dist'))       // save the combined version
        .pipe(rename('all.min.js'))    // rename ready for minifying
        .pipe(uglify())                // minify
        .pipe(gulp.dest('dist'));
});

Default Task

If we name a gulp task 'default' it becomes the default task and you can run it by simply running
gulp
(instead of gulp task-name)

We can set our default task to run several tasks for us. For example:
gulp.task('default', ['sass', 'lint', 'coffee', 'watch']);

(presuming you already have 'sass', 'lint', 'coffee' and 'watch' tasks) it will run all the listed tasks.

Prerequisite Tasks

In a similar way, you can set prerequisite tasks to run, by listing them after the task name.
gulp.task('coffee', ['coffee-lint'], function() {
    return gulp.src('cs/*.coffee')
        .pipe(coffee())
        .pipe(gulp.dest('dist'));
});






Introduction to automating your tasks with the gulp.js build tool

Author: Alicia Sykes
Date: Sunday 6th September 2015


What is Gulp?

Gulp.js is a streaming build system built on Node.js. This basically means that it can be configured to perform repetitive tasks and coding operations automatically during development. For example it can compile all your CoffeeScript whenever a file changes, or it can minify your CSS, or maybe synchronize all your development browsers and refresh them on file change.

Gulp uses a variety of plugins to do these tasks, and there is a plugin to do pretty much everything you'd need to do very easily. If you can't find one to do a particular operation, you can make your own ;)

Why do I need to use a build system?

For all modern web applications (hybrid apps, sites, web backends...) there are certain tasks that are almost essential to ensure high quality - for example, checking your JavaScript for errors, minifying it, and concatenating it. There are also certain tasks that just make developing easier, like having your app tested in every browser and screen size whenever a file changes, or monitoring file sizes and network requests.

It is true that it is possible to do most of these tasks without a build system or tool in place, but using something like Gulp or Grunt is much more efficient, easier to use, fast, and keeps development code to a minimum and all in one place.


Example Gulpfile for a typical Node.js Express app

I've created a gulpfile for a typical Node Express project that uses CoffeeScript and Less. It is just intended as a working example of how you can integrate everything together, so you can modify it to meet your specific project needs.


Setting up example project

  1. Open the console and navigate into a new working directory
  2. Run the command: git clone https://github.com/Lissy93/gulp-example.git
  3. Install the dependencies by running: npm install
  4. Start the gulp script by running: gulp
So what the above steps should have done is: download the example project from GitHub, install all its dependencies found in the package.json and put them in the node_modules folder. Running gulp will then call the default task inside the gulpfile.js.

What this project does

If you look in the gulpfile.js you'll see there's a whole load of tasks being covered, mainly around processing the CSS and JavaScript ready for production. You can view the full list of tasks in the readme.md of the Git repo.

Testing it out

So once you've run the above commands in the terminal, if everything worked as it should have done, your web browser should have opened. If it didn't, try visiting http://localhost:4000. (If there is nothing there, check the console for errors.)

Browser Sync

If you open another browser and view the same URL, you'll notice that the two browsers are in sync. So if you scroll down on one, the other will scroll; if you click a link on one, all browsers will follow. This is really useful for testing your app out on a range of browsers and screen sizes all at once without having to even do any clicking - works better if you have a decent number of monitors ;)
It's done using a gulp plugin called browser-sync.
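As a rough sketch of how such a task might be wired up (assuming the browser-sync module and the Express app running on port 4000 - see the repo's gulpfile.js for the actual configuration):

var browserSync = require('browser-sync').create();

gulp.task('browser-sync', function() {
    // Proxy the running Express app and keep all connected browsers in sync
    browserSync.init({ proxy: 'localhost:4000' });
    // Reload every browser whenever anything in public/ changes
    gulp.watch('public/**/*').on('change', browserSync.reload);
});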

Nodemon

Secondly, you'll notice that if you make any changes to any of the Jade templates or views, they will update live across all your browsers as you code. No refreshing needed :) (You do need to set your IDE to autosave on keyup, which should be the default if you're using any half-decent IDE.) This is done using nodemon in gulp.
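A minimal sketch of such a task, assuming the gulp-nodemon plugin (the example repo may configure it differently):

var nodemon = require('gulp-nodemon');

gulp.task('nodemon', function() {
    // Restart the server whenever server-side code or views change
    nodemon({
        script: 'app.js',
        ext: 'js jade'
    });
});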

Linting, Compiling, Concatenating, Piping... styles and scripts

Now for the coolest part. In your working directory, open up the sources folder. If you edit any of the CSS, Less, JavaScript or CoffeeScript files, you'll see that as each one saves, a new version of the production code is created in your public directory, and the browsers refresh accordingly. The code in the public directory is all minified and has had everything else done to it to make it awesome and really efficient. Check the console for a list of all the tasks gulp has just run.

Exercises

  1. Try creating and modifying the JavaScript and CoffeeScript files in the javascript source directory, then look in the public directory and see what they look like in production form.
  2. In a similar way, modify the CSS and Less files; you should see the changes in the browser
  3. Have a read through gulpfile.js and modify the configuration to suit your project, then test it out.
  4. Install a new gulp-plugin and set it up by seeing how the rest have been done
  5. Try running some of the tasks individually; for example gulp clean should just clean the public directory and gulp watch should just watch for changes and update files accordingly.
If the console freezes, cancel the process (Ctrl+C) and rerun gulp.



Polymer and Modern Web APIs

Author: Alicia Sykes
Date: Wednesday 24th June 2015


Polymer comes from Google's web platform team, and it officially began 3 years ago - but last week Google announced that 1.0 has been released and is ready for production. Previously, building web apps across multiple platforms and form factors was really challenging, as different components are not always designed to work together - the answer to this is web components.

Web components allow custom components to be used everywhere, and they are interoperable, meaning they add another layer of functionality above the platform but below other frameworks. Web components standardise everything.

Polymer is a library for building web components; it makes it fast and easy to build web components that can be used everywhere. Polymer is not a framework - because web components are not a framework. Web components built with Polymer are not replacing anything else; they can work with everything else.

Polymer 1.0

Polymer 1.0 is brand new: every line of code has been re-written in the past year, so that it is considerably faster, less complex and generally better than the previous 0.x versions. It is 3 times faster on Chrome (than previous versions), 4 times faster on mobile Safari, and 30% less code overall. The whole library is only around 42 kB, including all the polyfills.
1.0 also has a lot of new features. Firstly Shady DOM, which replaces Polymer's previous shadow DOM polyfill and is a simpler implementation.

Another core new feature in 1.0 is theming and styling with CSS custom properties. Web components introduced scoping and custom CSS selectors.


Polymer Elements

Initially there were two main branches of components in Polymer: the iron elements and the paper elements. Google have introduced three new branches.

Firstly, the Google web components. So if you need to add Google Maps, for example, use the Google Map tag. There are elements for all of Google's core web services. It's a new Google SDK for the web, created through these elements.

A second branch of elements introduced are the platinum elements; these bring together powerful features such as service workers. So for dropping push notifications onto your page, or offline caching, or anything like that - just put the appropriate element into your page.

The gold elements include mobile and web e-commerce and high-quality checkout processes, such as verifying credit card details.

Google have also created a catalogue of polymer elements https://elements.polymer-project.org/





Read Google's official blog announcement here


Introduction to react.js

Author: Alicia Sykes
Date: Monday 25th May 2015


React is a JavaScript library built at Facebook; it was built to answer the question "How should we structure JavaScript applications?"
There are a lot of JavaScript frameworks that try to answer this question. Most of them are MVC-based (or MVVM or MVW) - basically they're all based around models, which are just observable objects with an events API that allows you to subscribe to changes on that object. So developers set up bi-directional data-binding that allows them to subscribe to changes on the model, and whenever something changes they mutate and update the view.

React is a JavaScript library for building user interfaces: you get all the good parts of a complete re-render, but without the downsides, such as the performance cost and loss of state.

At the heart of React are declarative components - describing what components look like at any point in time.

Initial Render

There is no explicit data binding in React; we just define one render function, and the purpose of this render function is to describe what your view looks like at any point in time. It returns a representation of your view, and render is called recursively to build up this hierarchy. When we want to generate the mark-up of this representation for the first time, we take the representation, iterate over it to generate a string, and inject that string into the document. This is called two-pass rendering: first the string is generated, then, after it has been injected into the document, the event handlers are attached at the top level. This exposes some really interesting opportunities - since you're generating your string somewhere separate from where you're hooking up your events, you can render on the server.

Update Rendering

Instead of mutation, React updates using a process called reconciliation. The purpose of this is to keep your UI up to date as your data changes, automatically updating your views and the DOM. The same render function that did the initial rendering returns a representation of what our components should look like at that point in time; React compares that with the current DOM, finds all the differences, and based on those differences creates DOM updates for just the relevant parts of the view.

Building DOM Representations

Since the HTML is defined in JavaScript, it would get a bit hard to understand for larger pages with a lot of nesting - there would be curly braces everywhere - so for that reason the JSX syntax is used to define the elements. This is very similar to other templating engines and uses ordinary HTML-like syntax.
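For example, a minimal sketch of a component written with JSX, in the React.createClass style that was current at the time of writing:

var HelloMessage = React.createClass({
    render: function() {
        // JSX: HTML-like syntax that compiles down to plain JavaScript calls
        return <div>Hello, {this.props.name}</div>;
    }
});

React.render(<HelloMessage name="Jane" />, document.getElementById('app'));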




This post is based on information given by Tom Occhino from Facebook in his series about React.js


How to create a web service to send emails for your Android, iOS or web application

Author: Alicia Sykes
Date: Monday 25th May 2015


Since it's a common task to have to send emails from your app, this post outlines the quickest way to get a mail service up and running using server-side JavaScript, Parse and Mandrill. No JavaScript or web coding experience is needed.

Set up Parse

1. Go to parse.com and create a Cloud Code app, following the process
2. Download https://www.parse.com/downloads/windows/console/parse.zip
3. Extract the zip
4. Run the Parse console
5. cd into your working directory
6. Run the command: parse new <name-of-project>
7. Make changes to your code if you like
8. Run the command: parse deploy
Done!


Set up Mandrill

Mandrill is an email infrastructure service by MailChimp. It's free to use up to a limit of 12,000 emails per month (and 250 per hour), and it's easy to set up.
Head over to https://mandrill.com/ and sign up to get an API key.

The Code

Inside your new Parse project there should be a folder called cloud; cd into that and create a file called main.js (if it doesn't already exist).
Paste the following code into cloud/main.js

Parse.Cloud.define("sendMail", function(request, response) {
    var Mandrill = require('mandrill');
    Mandrill.initialize('<Mandrill_api_key>');
    Mandrill.sendEmail({
        message: {
            text: request.params.text,
            subject: request.params.subject,
            from_email: request.params.fromEmail,
            from_name: request.params.fromName,
            to: [{
                email: request.params.toEmail,
                name: request.params.toName
            }]
        },
        async: true
    }, {
        success: function(httpResponse) {
            console.log(httpResponse);
            response.success("Email sent!");
        },
        error: function(httpResponse) {
            console.error(httpResponse);
            response.error("ERROR - mail failed to send");
        }
    });
});

Then change <Mandrill_api_key> to your Mandrill API key (obviously without the angle brackets).
Once this is done, run parse deploy to push your work to Parse.

Calling your service

Below are all the parameters you'll need to send emails from your application.

URL:
https://api.parse.com/1/functions/sendMail
 
Request headers:

Header                      Value
Content-Type                application/json
Accept                      application/json
X-Parse-Application-Id      <Your_parse_application_id>
X-Parse-REST-API-Key        <Your_parse_rest_api_key>

Raw JSON body:
{
   "toEmail":"[email protected]",
   "toName":"jane doe",
   "fromEmail":"[email protected]",
   "fromName":"john smith",
   "text":"this is the email body for the main message",
   "subject":"this is the email subject"
}
 

You will probably want to test this out before you include it in your app. A good way to do this is to use the Postman client, available free on the Chrome store (similar tools are out there for Firefox and Safari).

Fill in the request headers and body as above, and you should see your new email service working :)
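Alternatively, here is a sketch of the same request made with curl (using the placeholder values from above):

curl -X POST \
  -H "X-Parse-Application-Id: <Your_parse_application_id>" \
  -H "X-Parse-REST-API-Key: <Your_parse_rest_api_key>" \
  -H "Content-Type: application/json" \
  -d '{"toEmail":"[email protected]","toName":"jane doe","fromEmail":"[email protected]","fromName":"john smith","text":"this is the email body for the main message","subject":"this is the email subject"}' \
  https://api.parse.com/1/functions/sendMail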



