Learning a new language involves a series of steps, whereas its mastery is a product of patience, practice, mistakes, and experience.
Some developers know enough to deliver features to a client’s specification, but it takes more than that to be a good developer.
A good developer is one who takes time to go back and get a good grasp of a language’s underlying/core concepts.
Today we take a deeper look at JavaScript closures and hope that the knowledge you learn will be beneficial in your projects.
A JavaScript Closure is when an inner function has access to members of the outer function (lexical scope) even when executing outside the scope of the outer function.
We therefore cannot talk about closures without also covering functions and scope.
Scope refers to the extent of visibility of a variable defined in a program. Ways to create scope in JavaScript include try-catch blocks, functions, and the let keyword with curly braces, among others. We mainly have two variations of scope: the global scope and local scope.
var initialBalance = 0 // Global Scope
function deposit (amount) {
/**
* Local Scope
* Code here has access to anything declared in the global scope
*/
var newBalance = parseInt(initialBalance) + parseInt(amount)
return newBalance
}
Each function in JavaScript creates its own local scope when declared.
This means that whatever is declared inside the function’s local scope is not accessible from the outside. Consider the illustration below:
var initialBalance = 300 // Variable declared in the Global Scope
function withdraw (amount) {
var balance // Variable declared in function scope
balance = parseInt(initialBalance) - parseInt(amount)
return balance
}
console.log(initialBalance) // Will output initialBalance value as it is declared in the global scope
console.log(balance) // ReferenceError: Can't find variable: balance
JavaScript’s Lexical Scope is determined during the compile phase. It sets the scope of a variable so that it may only be called/referenced from within the block of code in which it is defined.
A function declared inside a surrounding function block has access to variables in the surrounding function’s lexical scope.
var initialBalance = 300 // Global Scope
function withdraw (amount) {
/**
* Local Scope
* Code here has access to anything declared in the global scope
*/
var balance = parseInt(initialBalance) - parseInt(amount)
const actualBalance = (function () {
const TRANSACTIONCOST = 35
return balance - TRANSACTIONCOST /**
* Accesses balance variable from the lexical scope
*/
})() // Immediately Invoked Function expression. IIFE
// console.log(TRANSACTIONCOST) // ReferenceError: Can't find variable: TRANSACTIONCOST
return actualBalance
}
Invoking an inner function outside of its enclosing function while maintaining access to variables in its enclosing function (lexical scope) creates a JavaScript closure.
function person () {
var name = 'Paul' // Local variable
var actions = {
speak: function () {
// new function scope
console.log('My name is ', name) /**
* Accessing the name variable from the outer function scope (lexical scope)
*/
}
} // actions object with a function
return actions /**
* We return the actions object
* We then can invoke the speak function outside this scope
*/
}
person().speak() // Inner function invoked outside its lexical Scope
A Closure allows us to expose a public interface while at the same time hiding and preserving execution context from the outside scope.
Some JavaScript design patterns make use of closures. One well-known example is the module pattern, which allows you to emulate private, public, and privileged members.
var Module = (function () {
var foo = 'foo' // Private Property
function addToFoo (bam) { // Private Method
foo = bam
return foo
}
var publicInterface = {
bar: function () { // Public Method
return 'bar'
},
bam: function () { // Public Method
return addToFoo('bam') // Invoking the private method
}
}
return publicInterface // Object will contain public methods
})()
Module.bar() // bar
Module.bam() // bam
From our module pattern illustration above, only public methods and properties in the return object will be available outside the closure’s execution context.
All private members will still exist as their execution context is preserved but hidden from the outside scope.
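The same idea can be boiled down to a small sketch (the counter and its names are ours, purely for illustration): the private variable survives between calls because of the closure, yet cannot be read or written from outside.

```javascript
function makeCounter () {
  var count = 0 // Private: only reachable through the closure below
  return {
    increment: function () {
      count += 1
      return count
    },
    current: function () {
      return count
    }
  }
}

var counter = makeCounter()
counter.increment() // 1
counter.increment() // 2
counter.current() // 2 -- but counter.count is undefined; the variable stays hidden
```

Each call to makeCounter produces a fresh, independent count, which is exactly the execution-context preservation described above.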
When we pass a function into setTimeout or any other kind of callback, the function still remembers its lexical scope because of the closure.
function foo () {
var bar = 'bar'
setTimeout(function () {
console.log(bar)
}, 1000)
}
foo() // bar
Closure and loops
for (var i = 1; i <= 5; i++) {
(function (i) {
setTimeout(function () {
console.log(i)
}, i * 1000)
})(i)
}
/**
* Prints 1 through 5 after each second
* Closure enables us to remember the variable i
* An IIFE to pass in a new value of the variable i for each iteration
* IIFE (Immediately Invoked Function expression)
*/
for (let i = 1; i <= 5; i++) {
(function (i) {
setTimeout(function () {
console.log(i)
}, i * 1000)
})(i)
}
/**
* Prints 1 through 5 after each second
* Closure enabling us to remember the variable i
* The let keyword rebinds the value of i for each iteration
*/
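To see why the IIFE or the let keyword is needed at all, here is a synchronous sketch (using an array of closures instead of setTimeout, so the results are easy to inspect) of what happens with a plain var and no IIFE:

```javascript
var fns = []
for (var i = 1; i <= 3; i++) {
  fns.push(function () { return i })
}
var results = fns.map(function (f) { return f() })
console.log(results) // [ 4, 4, 4 ] -- every closure shares the single i, which is 4 by the time they run

var fns2 = []
for (let j = 1; j <= 3; j++) {
  fns2.push(function () { return j })
}
var results2 = fns2.map(function (f) { return f() })
console.log(results2) // [ 1, 2, 3 ] -- let creates a fresh binding of j for each iteration
```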
We should now have a solid understanding of closures: how functions preserve their lexical scope, how the module pattern hides private state, and how closures behave in callbacks and loops.
Until next time, happy coding.
Developers are a lazy bunch. Or at least I assume we are. For this reason, we tend to build tools that make our work faster, from highly customizable editors to task runners.
With gulp, we can build tasks that automatically compile Sass, start a Laravel server, live reload the browser, transpile ES6 to ES5, etc.
Thankfully, a few languages out there, like JavaScript, are very forgiving. Nonetheless, mistakes can happen.
Since we have a “gulp watcher” that watches our project and runs defined tasks when we make any change, an error can easily break our pipeline.
Watching in Gulp refers to triggering a task when a change is made to a project’s source.
So, before we watch a task, let’s create a task that we will use as our example throughout this tutorial. The task we will create is a SCSS compilation task.
Create a new working directory and name it whatever you want. Inside it, create a gulpfile.js and add the build task. Before we define the task, though, we need to install our dependencies.
For this article, here is a list of our dependencies.
{
"private": true,
"devDependencies": {
"gulp": "^3.9.1",
"gulp-notify": "^2.2.0",
"gulp-plumber": "^1.1.0",
"gulp-sass": "^2.3.2",
"gulp-util": "^3.0.7"
}
}
Now that we have our dependency list, we can run npm install, or, if you have the yarn package manager installed, yarn install.
In the gulpfile, we can then define our gulp task.
const gulp = require('gulp');
const sass = require('gulp-sass');
gulp.task('compile-scss', function () {
gulp.src('scss/main.scss')
.pipe(sass())
.pipe(gulp.dest('css/'));
});
So from the command line, we can run gulp compile-scss
and our Sass file should be compiled.
Now that we have a task defined, let’s trigger the file whenever we make a change to the project’s source.
gulp.task('watch', function () {
gulp.watch('scss/**/*.scss', ['compile-scss']);
});
From the terminal, we can run gulp watch; whenever a file with the .scss extension in any folder within the scss directory changes, the compile-scss task is run.
We’ve got our task and watcher up and running, but if an error occurs in our SCSS file, the gulp watcher gets terminated. We then have to go back to the terminal and type gulp watch again. This gets very annoying really fast; a single stray semicolon can break our watcher.
To avoid breakage like this, we can do one of three things.
One way to go about dealing with errors is to “swallow” them. The error(s) will be absorbed by the application to prevent the task from breaking. Basically, errors will not be reported and the task will keep running.
Since gulp sends a lot of events, we can hook into the error event of the task we don’t want to fail.
gulp.task('compile-scss', function () {
gulp.src('scss/main.scss')
.pipe(sass())
.on('error', function (err) {
console.log(err.toString());
this.emit('end');
})
.pipe(gulp.dest('css/'));
});
As you can see above, we attach an on listener to the task. The on event listener takes two parameters: the name of the event and a function to be triggered when the event fires. That function receives the error object, and we log the stringified version of the error to the terminal.
It is absolutely important to call this.emit('end'); if this event is not emitted, the next pipe in the task's pipeline never gets called, and the buffer will be left open.
This method involves using the gulp-util plugin.
The gulp-util plugin provides a lot of helpful methods, one of them is log
. With this method, we can log the error to the terminal. To use this, we attach an error event listener to the pipe.
var gutil = require('gulp-util');
gulp.task('compile-scss', function () {
gulp.src('scss/main.scss')
.pipe(sass())
.on('error', gutil.log)
.pipe(gulp.dest('css/'));
});
But this method also requires us to go through each pipe in the pipeline and attach an .on('error', gutil.log) listener to every one, something like this:
gulp.task('compile-scss', function () {
gulp.src('scss/main.scss')
.pipe(sass())
.on('error', gutil.log)
.pipe(autoprefixer())
.on('error', gutil.log)
.pipe(gulp.dest('css/'));
});
Out of all three methods, this is my favorite. With gulp-plumber, we don’t need to go to each pipe and add a listener, we can just add a global listener to the task and have a meaningful error displayed.
var plumber = require('gulp-plumber');
gulp.task('compile-scss', function () {
gulp.src('scss/main.scss')
.pipe(plumber())
.pipe(sass())
.pipe(autoprefixer())
.pipe(cssnano())
.pipe(gulp.dest('css/'));
});
We can have multiple pipes in this task and still only ever need to call plumber once.
Now that we can see the errors without breaking out of watch, we need to find a way to get some kind of notification when an error occurs. There are several ways to do this, but I will cover only one method.
The method I will cover in this article plays a beeping sound when an error occurs and also shows a system notification that looks like this.
This notification looks different according to your operating system.
To get this feature to work, we need to extend the gulp-plumber plugin. So in our gulp task, we update our call to plumber.
gulp.task('scss', function () {
gulp.src('scss/main.scss')
.pipe(plumber({ errorHandler: function() {
// do stuff here
}}))
.pipe(sass())
.pipe(gulp.dest('css'));
});
Notice that we pass plumber an object whose errorHandler property takes a closure. We can then call our notify plugin in that closure.
var notify = require('gulp-notify');
gulp.task('scss', function () {
gulp.src('scss/main.scss')
.pipe(plumber({ errorHandler: function(err) {
notify.onError({
title: "Gulp error in " + err.plugin,
message: err.toString()
})(err);
}}))
.pipe(sass())
.pipe(gulp.dest('css'));
});
We call the notify plugin and pass it an object that has a title and message property. Now, when an error occurs, a notification is triggered. To play a beeping sound, we can use gulp-util.
var notify = require('gulp-notify');
var gutil = require('gulp-util');
gulp.task('scss', function () {
gulp.src('scss/main.scss')
.pipe(plumber({ errorHandler: function(err) {
notify.onError({
title: "Gulp error in " + err.plugin,
message: err.toString()
})(err);
// play a sound once
gutil.beep();
}}))
.pipe(sass())
.pipe(gulp.dest('css'));
});
Now, when an error occurs, we get both sound and system notification, and then you can check your terminal for more information.
The configuration in this article should be suitable for most users, but if you have any suggestions/improvements, please let us know in the comments.
Git, a version control system created by Linus Torvalds, author of the Linux kernel, has become one of the most popular version control systems used globally. Certainly, this is because of its distributed nature, high performance, and reliability.
In this tutorial, we’ll look at git hooks. These hooks are a feature of git which furthers its extensibility by allowing developers to create event-triggered scripts.
We’ll look through the different types of git hooks and implement a few to get you well on the way to customizing your own.
A git hook is a script that git executes before or after a relevant git event or action is triggered.
Throughout the developer version control workflow, git hooks enable you to customize git’s internal behavior when certain events are triggered.
They can be used to perform actions such as:
This proves extremely helpful for developers as git gives them the flexibility to fine-tune their development environment and automate development.
Before we get started, there are a few key programs we need to install.
Confirm that you’ve installed them correctly by running the following in your terminal:
- git --version && node --version && bash --version
You should see similar results
- git version 2.7.4 (Apple Git-66)
- v6.2.2
- GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin15)
- Copyright (C) 2007 Free Software Foundation, Inc.
We’ll be using the following directory structure, so go ahead and lay out your project like this.
+-- git-hooks
+-- custom-hooks
+-- src
| +-- index.js
+-- test
| +-- test.js
+-- .jscsrc
That’s all for now as far as prerequisites go, so let’s dive in.
git hooks can be categorized into two main types. These are:
In this tutorial, we’ll focus more on client-side hooks. However, we will briefly discuss server-side hooks.
These are hooks installed and maintained on the developer’s local repository and are executed when events on the local repository are triggered. Because they are maintained locally, they are also known as local hooks.
Since they are local, they cannot be used as a way to enforce universal commit policies on a remote repository as each developer can alter their hooks. However, they make it easier for developers to adhere to workflow guidelines like linting and commit message guides.
Initialize the project we just created as a git repository by running
- git init
Next, let’s navigate to the .git/hooks
directory in our project and expose the contents of the folder
- cd ./.git/hooks && ls
We’ll notice a few files inside the hooks directory, namely
applypatch-msg.sample
commit-msg.sample
post-update.sample
pre-applypatch.sample
pre-commit.sample
pre-push.sample
pre-rebase.sample
prepare-commit-msg.sample
update.sample
These scripts are the default hooks that git has so helpfully gifted us with. Notice that their names make reference to git events like pushes, commits, and rebases.
Useful in their own right, they also serve as a guideline on how hooks for certain events can be triggered.
The .sample
extension prevents them from being run, so to enable them, remove the .sample
extension from the script name.
The hooks we’ll write here will be in bash though you can use Python or even Perl. Git hooks can be written in any language as long as the file is executable.
We make the hook executable by using the chmod utility.
- chmod +x .git/hooks/<insert-hook-name-here>
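As a sketch of that flexibility, here is what a pre-commit hook might look like in Node instead of bash. The .tmp-file policy and the function name are invented for illustration; all git cares about is the exit status:

```javascript
#!/usr/bin/env node
// Hypothetical pre-commit hook: refuse a commit that stages any .tmp file.
// Exit status 0 lets the commit proceed; anything else aborts it.

function findTempFiles (stagedOutput) {
  return stagedOutput.split('\n').filter(function (name) {
    return name.endsWith('.tmp');
  });
}

try {
  var execSync = require('child_process').execSync;
  // Ask git for the list of staged file names
  var staged = execSync('git diff --cached --name-only', { encoding: 'utf8' });
  var offenders = findTempFiles(staged);
  if (offenders.length > 0) {
    console.error('Aborting commit. Temp files staged: ' + offenders.join(', '));
    process.exit(1); // non-zero exit aborts the commit
  }
} catch (e) {
  // Not inside a git repository; nothing to check
}
```

Saved as .git/hooks/pre-commit and made executable with chmod +x, git would run it just like the bash versions below.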
Mimicking the developer workflow for the commit process, hooks are executed in the following hierarchy.
<pre-commit>
|
<prepare-commit-msg>
|
<commit-msg>
|
<post-commit>
The pre-commit hook is executed before git asks the developer for a commit message or creates a commit package. This hook can be used to make sure certain checks pass before a commit can be considered worthy to be made to the remote. No arguments are passed to the pre-commit script, and if the script exits with a non-zero value, the commit event will be aborted.
Before we get into anything heavy, let’s create a simple pre-commit hook to get us comfortable.
Create a pre-commit hook inside the .git/hooks
directory like this.
- touch pre-commit && vi pre-commit
Enter the following into the pre-commit hook file
#!/bin/bash
echo "Can you make a commit? Well, it depends."
exit 1
Save and exit the editor by running:
- esc then :wq
Don’t forget to make the hook file executable by running:
- chmod +x .git/hooks/pre-commit
Let’s write out some code to test our newly minted hook against. At the root of our project, create a file called hello-world.py
:
- touch hello-world.py
Inside the file, enter the following:
print ('Hello Hooks') # python v3
# print 'Hello Hooks' # python v2
Next, let’s add the file into our git staging environment and begin a commit.
- git add . && git commit
Are you surprised that git doesn’t let us commit our work?
As an experiment, modify the last line in the pre-commit
hook we created from exit 1
to exit 0
and trigger another commit.
Now that we understand that a hook is just an event-triggered script, let’s create something with more utility.
In our example below, we want to make sure that all the tests for our code pass and that we have no linting errors before we commit.
We’re using mocha as our JavaScript test framework and jscs as our linter.
Fill the following into the .git/hooks/pre-commit
file
#!/bin/bash
# Exits with non zero status if tests fail or linting errors exist
num_of_failures=`mocha -R json | grep failures -m 1 | awk '{print $2}' | sed 's/[,]/''/'`
errors=`jscs -r inline ./test/test.js`
num_of_linting_errors=`jscs -r junit ./test/test.js | grep failures -m 1 | awk '{print $4}' | sed 's/failures=/''/' | sed s/">"/''/ | sed s/\"/''/ | sed s/\"/''/`
if [ $num_of_failures != '0' ]; then
echo "$num_of_failures tests have failed. You cannot commit until all tests pass.
Commit exiting with a non-zero status."
exit 1
fi
if [ $num_of_linting_errors != '0' ]; then
echo "Linting errors present. $errors"
exit 1
fi
Save the document and exit the vi editor as usual by using,
- esc then :wq
The first line of the script indicates that we want the script to be run as a bash script. If the script was a python one, we would instead use
- #!/usr/bin/env python
Make the file executable as we mentioned before by running
- chmod +x .git/hooks/pre-commit
To give our commit hook something to test against, we’ll be creating a method that returns true
when an input string contains vowels and false
otherwise.
Create and populate a package.json
file at the root of our git-hooks folder by running
- npm init --yes
Install the project dependencies like this:
- npm install chai mocha jscs --save-dev
Let’s write a test for our prospective hasVowels
method.
git-hooks/test/test.js
const expect = require('chai').expect;
require('../src/index');
describe('Test hasVowels', () => {
it('should return false if the string has no vowels', () => {
expect('N VWLS'.hasVowels()).to.equal(false);
});
it('should return true if the string has vowels', () => {
expect('No vowels'.hasVowels()).to.equal(true)
// Introduce failing test
expect('Has vowels'.hasVowels()).to.equal(false);
});
});
git-hooks/src/index.js
// Method returns true if a vowel exists in the input string. Returns false otherwise.
String.prototype.hasVowels = function hasVowels() {
const vowels = new RegExp('[aeiou]', 'i');
return vowels.test(this);
};
To configure the jscs linter, fill the following into the .jscsrc
file we’d created in the beginning.
.jscsrc
{
"preset": "airbnb",
"disallowMultipleLineBreaks": null,
"requireSemicolons": true
}
Now add all the created files into the staging environment and trigger a commit.
- git add . && git commit
What do you think will happen?
You’re right. Git prevents us from making a commit. Rightfully so, because our tests have failed. Worry not. Our pre-commit script has helpfully provided us with hints regarding what could be wrong.
This is what it tells us:
1 tests have failed. You cannot commit until all tests pass.
Commit exiting with a non-zero status.
If you can’t take my word for it, the screenshot below serves as confirmation.
Let’s fix things. Edit line 13 in test/test.js
to
expect('Has vowels'.hasVowels()).to.equal(true);
Next, add the files to your staging environment, git add .
like we did before, and git commit
Git still prevents us from committing.
Linting errors present. ./test/test.js: line 10, col 49, requireSemicolons: Missing semicolon after statement
Edit line 10 in test/test.js
to
expect('No vowels'.hasVowels()).to.equal(true);
Now, running git commit
after git add .
should provide no challenges because our tests and linting have both passed.
You can skip the pre-commit hook by running git commit --no-verify
.
The prepare-commit-msg hook is executed after the pre-commit hook, and it populates the commit message presented in the editor.
This hook takes one, two, or three arguments.
In the code below, we’re electing to populate the commit editor workspace with a helpful commit message format reminder prefaced by the name of the current branch.
.git/hooks/prepare-commit-msg
#!/bin/bash
# Result will be output in place of the default commit message on running git commit
current_branch=`git rev-parse --abbrev-ref HEAD`
echo "#$current_branch Commit messages should be of the form [#StoryID:CommitType] Commit Message." > $1
Running git commit
will yield the following in the commit text editor
#<current-branch> Commit messages should be of the form [#StoryID:CommitType] Commit Message.
We can continue to edit our commit message and exit out of the editor as usual.
This hook is executed after the prepare-commit-msg hook. It can be used to reformat the commit message after it has been input or to validate the message against some checks. For example, it could be used to check for commit message spelling errors or length, before the commit is allowed.
This hook takes one argument: the path of the file that holds the commit message.
.git/hooks/commit-msg
#!/bin/bash
# Validates whether commit message is of a certain format.
# Aborts commit if message is unsatisfactory
# Standard commit from Pivotal Tracker [#135316555:Feature]Create Kafka Audit Trail
commit_standard_regex='\[#[0-9]{9,}:[a-z]{3,}\][a-z].+|merge'
error_message="Aborting commit. Please ensure your commit message meets the
standard requirement. '[#StoryID:CommitType]Commit Message'
Use '[#135316555:Feature]Create Kafka Audit Trail' for reference"
if ! grep -iqE "$commit_standard_regex" "$1"; then
echo "$error_message" >&2
exit 1
fi
In the code above, we’re validating the user-supplied commit message against a standard commit using a regular expression. If the supplied commit does not conform to the regular expression, an error message is directed to the shell’s standard output, the script exits with a status of one, and the commit is aborted.
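For comparison, the same kind of check could be expressed in JavaScript. This is a rough, hypothetical translation: the exact pattern and the function name are ours, and a real Node-based hook would read the commit message file passed as its first argument and exit non-zero on failure:

```javascript
// Accepts messages like '[#135316555:Feature]Create Kafka Audit Trail',
// plus anything containing 'merge' (to allow merge commits)
var COMMIT_STANDARD = /\[#\d{9,}:[a-z]{3,}\][a-z].+|merge/i;

function isValidCommitMessage (message) {
  return COMMIT_STANDARD.test(message);
}

isValidCommitMessage('[#135316555:Feature]Create Kafka Audit Trail'); // true
isValidCommitMessage('fixed stuff'); // false
```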
Go ahead. Create a change and try to make a commit of a form other than [#135316555:Chore]Test commit-msg hook.
Git will abort the commit process and give you a handy little tip regarding the format of your commit message.
This hook is executed after the commit-msg hook and since the commit has already been made it cannot abort the commit process.
It can however be used to notify the relevant stakeholders that a commit has been made to the remote repository. We could write a post-commit hook, say, to email our project team lead whenever we make a commit to the organization’s remote repository.
In this case, let’s congratulate ourselves on our hard work.
.git/hooks/post-commit
#!/bin/bash
say Congratulations! You\'ve just made a commit! Time for a break.
The post-checkout hook is executed after a successful git checkout is performed. It can be used to conveniently delete temporary files or prepare the checked out development environment by performing installations.
Its exit status does not affect the checkout process.
In the hook below, after checking out a branch, we’ll pull changes made by others on the remote branch and perform some installations.
.git/hooks/post-checkout
#!/bin/bash
# Executed immediately after a git checkout
repository_name=$(basename "$(git rev-parse --show-toplevel)")
current_branch=$(git rev-parse --abbrev-ref HEAD)
present_working_directory=$(pwd)
requirements=$(ls | grep 'requirements.txt')
echo "Pulling remote branch ....."
git pull origin $current_branch
echo
echo "Installing nodeJS dependencies ....."
npm install
echo
echo "Installing yarn package ....."
npm install yarn
echo "Yarning dependencies ......"
yarn
echo
# Only do this if you find a requirements.txt file at the root of the project
if [ "$(basename "$present_working_directory")" == "$repository_name" ] && [ "$requirements" == 'requirements.txt' ]; then
echo "Creating virtual environments for project ......."
source "$(which virtualenv)"
echo
mkvirtualenv $repository_name/$current_branch
workon $repository_name/$current_branch
echo "Installing python dependencies ......."
pip install -r requirements.txt
fi
Don’t forget to make the script executable.
To test the script out, create another branch and check it out like this.
- git checkout -b <new-branch>
This hook is executed before a rebase and can be used to stop the rebase if it is not desirable.
It takes one or two parameters:
Let’s outlaw all rebasing on our repository.
.git/hooks/pre-rebase
#!/bin/bash
echo " No rebasing until we grow up. Aborting rebase."
exit 1
Phew! We’ve gone through quite a number of client-side hooks. If you’re still with me, good work!
I’ve got some bad news and good news. Which one would you like first?
The bad
The .git/hooks
directory is not tagged by version control and so does not persist when we clone a remote repository or when we push changes to a remote repository. This is why we’d earlier stated that local hooks cannot be used to enforce commit policies.
The good
Now before you start sweating, there are a few ways we can get around this.
The first is to symlink our custom hooks into the .git/hooks folder. Create a pre-rebase file in our custom-hooks
directory and copy the pre-rebase hook we created in .git/hooks/pre-rebase
into it. Next, the rm
command removes the pre-rebase hook in .git/hooks
:
- touch custom-hooks/pre-rebase && cp .git/hooks/pre-rebase custom-hooks/pre-rebase && rm -f .git/hooks/pre-rebase
Next, use the ln
command to link the pre-rebase
file in custom-hooks
to the .git/hooks
directory.
- # ln -s <source> <target>
- ln -s custom-hooks/pre-rebase .git/hooks/pre-rebase
To confirm that the files have been linked, run the following
- ls -la .git/hooks
The output for the pre-rebase
file should be similar to this:
- lrwxr-xr-x 1 emabishi staff 23B Dec 27 14:57 pre-rebase -> custom-hooks/pre-rebase
Notice the l
character prefixing the filesystem file permissions line.
To unlink the two files,
- unlink .git/hooks/pre-rebase
or
- rm -f .git/hooks/pre-rebase
The second is to keep our hooks in a version-controlled directory outside .git/hooks. We’ve already done this by storing our pre-rebase hook in the custom-hooks directory. Like our other files, this folder can be pushed to our remote repository.
Server-side hooks, by contrast, are hooks that are executed on a remote repository when certain events are triggered.
Is it clear now? Client-side hooks respond to events on a local repository whilst server-side hooks respond to events triggered on a remote repository.
We’d come across some of them when we listed the files in the .git/hooks
directory.
Let’s look at a few of these hooks now.
The server-side hooks we’ll look at here are executed with the following hierarchy.
<pre-receive>
|
<update>
|
<post-receive>
This hook is triggered on the remote repository just before the pushed files are updated and can abort the receive process if it exits with a non-zero status.
Since the hook is executed just before the remote is updated, it can be used to enforce commit policies and reject the entire commit if it is deemed unsatisfactory.
The update hook is called after the pre-receive hook and functions similarly. The difference is that it filters each ref pushed to the remote repository independently. It can be used as a fine-tooth comb to reject or accept each ref being pushed.
This hook is triggered after an update has been done on the remote repository and so cannot abort the update process. Like the post-commit client-side hook, it can be used to trigger notifications on a successful remote repository update.
In fact, it is more suited for this because a log of the notifications will be stored on a remote server.
We’ve looked at quite a few hooks which should get you up and running. However, I’d love for you to do some more exploration.
For a more comprehensive look at git hooks, I’d like to direct you to:
It’s a brave new world out there when it comes to git hooks, so luckily, you don’t always have to write your own custom scripts. You can find a pretty comprehensive list of useful frameworks here.
All the code we’ve written can be found here.
Out of the box, Laravel comes installed with a lot of helpful commands available to use in an application. But as your application grows, you might find performing some tasks, like populating databases with user data or products, time-wasting.
At this point, automating those tasks will go a long way to help you get data into your database seamlessly and facilitate the rapid completion of your web application.
Some of the default Artisan commands for Laravel include php artisan serve
, php artisan make:controller
, php artisan make:model
, and so on.
In this article, we will be creating an artisan command to populate the database with product data. This tutorial will not only show you how to create a custom artisan command but also how to read data from a CSV file, parse it, and store it in our database using the command we are going to create.
To get started with the custom artisan command, I will assume that you already have Laravel installed; if not, quickly do that with the following command. As of the time of writing this tutorial, Laravel 5.5 is being used.
- composer create-project --prefer-dist laravel/laravel command
The command will create a Laravel project called command
in your local directory. Feel free to change as preferred.
Now that you have installed Laravel, let’s proceed to build our own custom commands as stated earlier. To create a custom command, use the command:
- php artisan make:command productData
The intention is to create a custom command to populate the products table, hence the reason for the name productData
. After successfully running this command, a new class will be created in the app/Console/Commands
directory within your project.
Open app/Console/Commands/productData.php; you should have content similar to:
<?php
namespace App\Console\Commands;
use Illuminate\Console\Command;
class productData extends Command
{
/**
* The name and signature of the console command.
*
* @var string
*/
protected $signature = 'command:name';
/**
* The console command description.
*
* @var string
*/
protected $description = 'Command description';
/**
* Create a new command instance.
*
* @return void
*/
public function __construct()
{
parent::__construct();
}
/**
* Execute the console command.
*
* @return mixed
*/
public function handle()
{
//
}
}
Now proceed to create the actual command by editing the file we just created:
<?php
namespace App\Console\Commands;
use App\Product;
use Illuminate\Console\Command;
class productData extends Command
{
/**
* The name and signature of the console command.
*
* @var string
*/
protected $signature = 'add:product';
/**
* The console command description.
*
* @var string
*/
protected $description = 'Add products data to the database';
public function __construct()
{
...
}
public function handle()
{
//
}
}
Here, we have changed the name and signature of the command and also added the command description. This will be used when displaying the command on the list screen.
We are close, but unfortunately, our newly created command will have no effect yet; as far as Laravel is concerned, it does not exist. To change this, we need to register the command: open the app/Console/Kernel.php file and add the command class we just created to the protected $commands array.
<?php
namespace App\Console;
use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;
class Kernel extends ConsoleKernel
{
/**
* The Artisan commands provided by your application.
*
* @var array
*/
protected $commands = [
Commands\productData::class,
];
protected function schedule(Schedule $schedule)
{
...
}
protected function commands()
{
...
}
}
To check if the command has been registered, run the artisan command:
- php artisan list
And that’s it: if registration worked, our command signature and description will appear in the list of available commands.
Congratulations! You just created your first custom Artisan command!
In order to give our command life, we are going to create a model and migration file for Product, and then write the logic that runs when the console command executes.
Generate a model and migration file with this command:
- php artisan make:model Product -m
This will generate two separate files: app/Product.php and a timestamped create_products_table migration in database/migrations. Add the contents below to each, respectively:
...
class Product extends Model
{
protected $table = "products";
/**
* The attributes that are mass assignable.
*
* @var array
*/
protected $fillable = [
'name', 'description', 'quantity'
];
}
And:
<?php
...
class CreateProductsTable extends Migration
{
public function up()
{
Schema::create('products', function (Blueprint $table) {
$table->increments('id');
$table->string('name');
$table->string('description');
$table->string('quantity');
$table->timestamps();
});
}
public function down()
{
...
}
}
Open the .env file and add your database details:
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=your-database-name
DB_USERNAME=your-database-username
DB_PASSWORD=your-database-password
Now run the migrations to create the tables:
- php artisan migrate
Once you execute the newly created command, the handle method within the productData class will be called, so let’s edit that and place the required command logic in this method.
<?php
...
class productData extends Command
{
protected $signature = 'add:product';
protected $description = 'Add products data to the database';
public function __construct()
{
...
}
/**
* Execute the console command.
*
* @return mixed
*/
public function handle()
{
$CSVFile = public_path('products.csv');
if(!file_exists($CSVFile) || !is_readable($CSVFile))
return false;
$header = null;
$data = array();
if (($handle = fopen($CSVFile,'r')) !== false){
while (($row = fgetcsv($handle, 1000, ',')) !==false){
if (!$header)
$header = $row;
else
$data[] = array_combine($header, $row);
}
fclose($handle);
}
$dataCount = count($data);
for ($i = 0; $i < $dataCount; $i++) {
Product::firstOrCreate($data[$i]);
}
echo "Products data added successfully\n";
}
}
You can find the sample CSV file used above here.
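Since array_combine() maps the header row onto each subsequent row, the CSV is expected to begin with a header matching the products table columns. A minimal products.csv could look like this (the data rows here are made-up examples, not the actual sample file):

```csv
name,description,quantity
Keyboard,Mechanical keyboard,10
Monitor,24-inch LED monitor,5
```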
It’s time to see our custom command at work.
Run
- php artisan add:product
You should get a response stating Products data added successfully after running that command.
A quick look at what we now have in the database.
There is a lot more you can effortlessly achieve by creating a custom Artisan command. You can build on this and create more awesome commands to make development very easy for you when using Laravel.
As shown in this tutorial, you can also read data from a CSV file and store it into your database with just a single command line. I hope this tutorial has been very helpful. If you have any questions or thoughts that require clarifications, kindly drop a comment.
Today we’ll be looking at ways to spice up our sites when people land on them. I experimented with this a little on a personal project, CODE Hearted. You can see that the logo and content animate on page load, which gives the page a little more life.
Not only can we use this technique to add a little pizazz, but we can also use it for UI/UX purposes to guide the user’s eyes across our page. Let’s say we only want a tagline to show up that tells a user what our site is about; then, 3 seconds later, the content shows up. The combinations are unlimited, so definitely play around with these animations and let your imagination go crazy. I think I went a little overboard on some animations (we probably wouldn’t want everything on a page to move), but it was for demonstration purposes.
The main way to build this technique is using CSS3’s animation feature and the animation-delay property.
This demo won’t work in all browsers since it is purely CSS3.
Let’s start by setting up our site. We are going to use the super awesome Animate.css by Dan Eden for our animations, along with Twitter Bootstrap. We could write our own styles, but we’ll use these to make it quick and easy.
We’ll use this simple file structure:
css/
  style.css
index.html
We’ll set up the HTML needed for our page, loading bootstrap and animate.css from a CDN.
To use animate.css, we add a class of animated plus the type of animation we want to use; the available animations are listed on the Animate.css demo page. With just that, all the animations run at the same time. We then add a specific class to each element to vary the animation delay, and that’s what gives us our staggered effect.
<!doctype html>
<html>
<head>
<title>CSS3 Page Loading Animations</title>
<link rel="stylesheet" href="http://netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css"><!-- load bootstrap -->
<link rel="stylesheet" href="http://cdn.jsdelivr.net/animatecss/2.1.0/animate.min.css"><!-- load animate -->
<link rel="stylesheet" href="css/style.css">
</head>
<body>
<div class="container">
<div id="header" class="row text-center">
<div id="logo">
<span id="danger">
<span class="dd animated bounceInDown">d</span>
<span class="da animated bounceInDown">a</span>
<span class="dn animated bounceInDown">n</span>
<span class="dg animated bounceInDown">g</span>
<span class="de animated bounceInDown">e</span>
<span class="dr animated bounceInDown">r</span>
</span>
<span id="zone">
<span class="zz animated bounceInDown">z</span>
<span class="zo animated bounceInDown">o</span>
<span class="zn animated bounceInDown">n</span>
<span class="ze animated bounceInDown">e</span>
</span>
</div>
<nav id="main-nav">
<ul class="list-unstyled list-inline">
<li><a id="demo-1" class="animated btn btn-danger" href="index.html">Demo 1</a></li>
<li><a id="demo-2" class="animated btn btn-danger" href="two.html">Demo 2</a></li>
<li><a id="demo-3" class="animated btn btn-danger" href="three.html">Demo 3</a></li>
</ul>
</nav>
</div>
<div id="main" class="row">
<div id="sidebar" class="col-sm-4">
<nav id="sidebar-nav">
<ul>
<li><a id="side-home" class="animated bounceInLeft" href="#">Home</a>
<li><a id="side-about" class="animated bounceInLeft" href="#">About</a>
<li><a id="side-work" class="animated bounceInLeft" href="#">Work</a>
<li><a id="side-contact" class="animated bounceInLeft" href="#">Contact</a>
</ul>
</nav>
</div>
<div id="content" class="animated bounceInUp col-sm-8 text-center">
<div class="row">
<div class="col-sm-4">
<img class="img-responsive animated bounceInUp" src="http://lorempixel.com/500/500/people">
</div>
<div class="col-sm-4">
<img class="img-responsive animated bounceInUp" src="http://lorempixel.com/500/500/nature">
</div>
<div class="col-sm-4">
<img class="img-responsive animated bounceInUp" src="http://lorempixel.com/500/500">
</div>
</div>
</div>
</div>
</div>
</body>
</html>
Now we have a solid foundation for our site. Using animate, all of those things that are animated will now move when you view your site. Let’s add some more styling and then we’ll get to the animations.
/* BASE
============================================================================= */
@import url(http://fonts.googleapis.com/css?family=Offside);
html { overflow-y:scroll; }
body { margin-top:40px; }
/* HEADER
============================================================================= */
#header { margin-bottom:50px; }
/* logo */
#logo { color:#FFF; font-family:'Offside'; font-size:80px; margin-bottom:50px; margin-top:50px; }
#logo span { display:inline-block; }
/* MAIN NAV
============================================================================= */
#main-nav { margin-bottom:30px; }
/* SIDEBAR
============================================================================= */
#sidebar { }
#sidebar-nav { }
#sidebar-nav ul { list-style:none; padding-left:0; }
#sidebar-nav li { }
#sidebar-nav a { background:#428bca; color:#FFF; display:block; margin-bottom:10px; padding:20px; text-transform:uppercase;
border-radius:2px; -moz-border-radius:2px; -webkit-border-radius:2px;
}
#sidebar-nav a:hover { background:#3276b1; text-decoration:none; }
/* CONTENT
============================================================================= */
#content { background:#FFF; min-height:400px; padding:20px;
border-radius:2px; -moz-border-radius:2px; -webkit-border-radius:2px;
}
#content img { border-radius:2px; -moz-border-radius:2px; -webkit-border-radius:2px; }
/* ANIMATIONS
============================================================================= */
/* logo */
.dd { animation-delay:0.2s; -moz-animation-delay:0.2s; -webkit-animation-delay:0.2s; }
.da { animation-delay:0.8s; -moz-animation-delay:0.8s; -webkit-animation-delay:0.8s; }
.dn { animation-delay:0.6s; -moz-animation-delay:0.6s; -webkit-animation-delay:0.6s; }
.dg { animation-delay:1s; -moz-animation-delay:1s; -webkit-animation-delay:1s; }
.de { animation-delay:0.4s; -moz-animation-delay:0.4s; -webkit-animation-delay:0.4s; }
.dr { animation-delay:1.2s; -moz-animation-delay:1.2s; -webkit-animation-delay:1.2s; }
.zz { animation-delay:1.4s; -moz-animation-delay:1.4s; -webkit-animation-delay:1.4s; }
.zo { animation-delay:0.4s; -moz-animation-delay:0.4s; -webkit-animation-delay:0.4s; }
.zn { animation-delay:0.6s; -moz-animation-delay:0.6s; -webkit-animation-delay:0.6s; }
.ze { animation-delay:0.5s; -moz-animation-delay:0.5s; -webkit-animation-delay:0.5s; }
/* sidebar */
#side-home { animation-delay:0.2s; -moz-animation-delay:0.2s; -webkit-animation-delay:0.2s; }
#side-about { animation-delay:0.6s; -moz-animation-delay:0.6s; -webkit-animation-delay:0.6s; }
#side-work { animation-delay:0.8s; -moz-animation-delay:0.8s; -webkit-animation-delay:0.8s; }
#side-contact { animation-delay:0.3s; -moz-animation-delay:0.3s; -webkit-animation-delay:0.3s; }
/* content */
#content { animation-delay:1.5s; -moz-animation-delay:1.5s; -webkit-animation-delay:1.5s; }
#content img { animation-delay:1.7s; -moz-animation-delay:1.7s; -webkit-animation-delay:1.7s; }
And that’s it! By varying the animation-delay times you can create some pretty sweet page-load animations.
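Stripped to its essentials, the technique is just two pieces: an entrance animation that plays on load, and a per-element delay. A minimal hand-rolled version (the class names below are my own, not Animate.css’s) might look like:

```css
/* define a simple entrance animation */
@keyframes drop-in {
  from { opacity: 0; transform: translateY(-40px); }
  to   { opacity: 1; transform: translateY(0); }
}

/* 'both' keeps the element invisible until its delay ends, then holds the end state */
.drop-in { animation: drop-in 1s both; }

/* stagger individual elements */
.delay-1 { animation-delay: 0.3s; }
.delay-2 { animation-delay: 0.6s; }
```

Modern browsers no longer need the vendor prefixes shown in the stylesheet above, but they were necessary at the time this technique was new.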
Sublime Text is an incredibly powerful editor. Not only does it have a great number of features, it can also look good. We’ve gone through and looked at the best themes of 2014; now let’s have a look at the newest Sublime Text 3 themes.
Note: Update (November 05, 2016): Added a new theme, ayu, at the bottom of the article: https://github.com/dempfi/ayu
To install themes, just use Package Control. So the process would be:
- Open the command palette: CTRL+SHIFT+P (or CMD+SHIFT+P on a Mac)
- Select Package Control: Install Package
- Search for the theme you want
Then activate the theme in your user settings, for example:
{
"theme": "Lanzhou.sublime-theme"
}
The font used in the screenshots is called Operator Mono.
{
"color_scheme": "Packages/Boxy Theme/schemes/Boxy Yesterday.tmTheme",
"theme": "Boxy Yesterday.sublime-theme",
}
{
"color_scheme": "Packages/Boxy Theme/schemes/Boxy Tomorrow.tmTheme",
"theme": "Boxy Tomorrow.sublime-theme",
}
{
"color_scheme": "Packages/Boxy Theme/schemes/Boxy Ocean.tmTheme",
"theme": "Boxy Ocean.sublime-theme",
}
{
"color_scheme": "Packages/Boxy Theme/schemes/Boxy Monokai.tmTheme",
"theme": "Boxy Monokai.sublime-theme",
}
See Material Theme on Package Control
{
"theme": "Material-Theme.sublime-theme",
"color_scheme": "Packages/Material Theme/schemes/Material-Theme.tmTheme"
}
{
"theme": "Material-Theme-Darker.sublime-theme",
"color_scheme": "Packages/Material Theme/schemes/Material-Theme-Darker.tmTheme"
}
{
"theme": "Material-Theme-Lighter.sublime-theme",
"color_scheme": "Packages/Material Theme/schemes/Material-Theme-Lighter.tmTheme"
}
{
"theme": "Agila.sublime-theme",
"color_scheme": "Packages/Agila Theme/Agila Oceanic Next.tmTheme"
}
{
"theme": "Agila Classic.sublime-theme",
"color_scheme": "Packages/Agila Theme/Agila Classic Oceanic Next.tmTheme"
}
{
"theme": "Agila Light.sublime-theme",
"color_scheme": "Packages/Agila Theme/Agila Light Solarized.tmTheme"
}
Lanzhou uses the Base16 Ocean Dark color scheme. See Lanzhou on Package Control.
{
"theme": "Lanzhou.sublime-theme",
"color_scheme": "Packages/Theme - Lanzhou/base16-ocean.dark.tmTheme"
}
See Sunrise on Package Control
{
"theme": "Sunrise.sublime-theme"
}
{
"theme": "Theme - Kronuz.sublime-theme",
"color_scheme": "Packages/Theme - Kronuz/Kronuz.tmTheme"
}
{
"theme": "Autumn.sublime-theme",
"color_scheme": "Packages/Theme - Autumn/Autumn.tmTheme"
}
{
"theme": "ayu-light.sublime-theme",
"color_scheme": "Packages/ayu/ayu-light.tmTheme",
}
{
"theme": "ayu-mirage.sublime-theme",
"color_scheme": "Packages/ayu/ayu-mirage.tmTheme",
}
{
"theme": "ayu-dark.sublime-theme",
"color_scheme": "Packages/ayu/ayu-dark.tmTheme",
}
By far my favorite theme and color scheme is Boxy Monokai, with Agila a close second. However, I found these themes’ color schemes vary in how well they highlight syntax for the files I work in, like ES6, ReactJS/JSX, or HTML.
Know of any new themes or color schemes that you really liked?
Note: This article is part of our Easy Node Authentication series.
This is the final article in our Easy Node Authentication series; we will be bringing everything from the previous articles together.
Note: Edit 11/18/2017: Updated to reflect Facebook API changes.
This article will combine all the different Node Passport Strategies so that a user will be able to have one account and link all their social networks together.
Many changes need to happen to the code from the previous articles to accomplish this. Here are the main cases we have to account for when moving from authenticating with a single account to multiple accounts.
We’ll be going through each of these scenarios and updating our previous code to account for them.
We’ll be working with the Local Strategy and the Facebook Strategy to demonstrate linking accounts. The tactics used for the Facebook Strategy will carry over to Twitter and Google.
In order to add linking accounts to our application, we will need to:
When we set up our user model, we deliberately placed each account type within its own object. This ensures that we can link and unlink different accounts as our user sees fit. Notice that the social accounts use token and id, while our local account uses email and password.
...
var userSchema = mongoose.Schema({
local : {
email : String,
password : String,
},
facebook : {
id : String,
token : String,
email : String,
name : String
},
twitter : {
id : String,
token : String,
displayName : String,
username : String
},
google : {
id : String,
token : String,
email : String,
name : String
}
});
...
We have also added email, name, displayName, and username for some accounts, just to show that we can pull that information from the respective social connection.
Once a user has linked all their accounts together, they will have one user account in our database, with all of these fields full.
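To illustrate, a fully linked user document in MongoDB would look roughly like this (every value below is invented purely for illustration):

```
{
  "_id"      : ObjectId("..."),
  "local"    : { "email": "jane@example.com", "password": "$2a$08$..." },
  "facebook" : { "id": "1020304", "token": "CAAH...", "email": "jane@example.com", "name": "Jane Doe" },
  "twitter"  : { "id": "4087123", "token": "eyJh...", "displayName": "Jane Doe", "username": "janedoe" },
  "google"   : { "id": "1094567", "token": "ya29...", "email": "jane@example.com", "name": "Jane Doe" }
}
```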
When we originally made these Strategies, we used passport.authenticate, and that is still what we should use when a user first authenticates. But what do we do if they are already logged in? When they want to link another account to their current one, they are already logged in and their user is stored in session.
Luckily, Passport provides a way to “connect” a user’s account. They provide passport.authorize
for users that are already authenticated. To read more on the usage, visit the Passport authorize docs.
We will update our routes to handle the authorization first, and then we’ll update our Passport Strategies to handle the authorization.
Let’s create our routes first so that we can see how we link everything together. In the past articles, we created our routes for authentication. Let’s create a second set of routes for authorization. Once we’ve done that, we’ll change our Strategy to accommodate the new scenarios.
Our old routes will be commented to make a cleaner file.
module.exports = function(app, passport) {
// normal routes ===============================================================
// show the home page (will also have our login links)
// PROFILE SECTION =========================
// LOGOUT ==============================
// =============================================================================
// AUTHENTICATE (FIRST LOGIN) ==================================================
// =============================================================================
// locally --------------------------------
// LOGIN ===============================
// show the login form
// process the login form
// SIGNUP =================================
// show the signup form
// process the signup form
// facebook -------------------------------
// send to facebook to do the authentication
app.get('/auth/facebook', passport.authenticate('facebook', { scope : 'email' }));
// handle the callback after facebook has authenticated the user
app.get('/auth/facebook/callback',
passport.authenticate('facebook', {
successRedirect : '/profile',
failureRedirect : '/'
}));
// twitter --------------------------------
// send to twitter to do the authentication
// handle the callback after twitter has authenticated the user
// google ---------------------------------
// send to google to do the authentication
// the callback after google has authenticated the user
// =============================================================================
// AUTHORIZE (ALREADY LOGGED IN / CONNECTING OTHER SOCIAL ACCOUNT) =============
// =============================================================================
// locally --------------------------------
app.get('/connect/local', function(req, res) {
res.render('connect-local.ejs', { message: req.flash('loginMessage') });
});
app.post('/connect/local', passport.authenticate('local-signup', {
successRedirect : '/profile', // redirect to the secure profile section
failureRedirect : '/connect/local', // redirect back to the signup page if there is an error
failureFlash : true // allow flash messages
}));
// facebook -------------------------------
// send to facebook to do the authentication
app.get('/connect/facebook', passport.authorize('facebook', {
scope : ['public_profile', 'email']
}));
// handle the callback after facebook has authorized the user
app.get('/connect/facebook/callback',
passport.authorize('facebook', {
successRedirect : '/profile',
failureRedirect : '/'
}));
// twitter --------------------------------
// send to twitter to do the authentication
app.get('/connect/twitter', passport.authorize('twitter', { scope : 'email' }));
// handle the callback after twitter has authorized the user
app.get('/connect/twitter/callback',
passport.authorize('twitter', {
successRedirect : '/profile',
failureRedirect : '/'
}));
// google ---------------------------------
// send to google to do the authentication
app.get('/connect/google', passport.authorize('google', { scope : ['profile', 'email'] }));
// the callback after google has authorized the user
app.get('/connect/google/callback',
passport.authorize('google', {
successRedirect : '/profile',
failureRedirect : '/'
}));
};
// route middleware to ensure user is logged in
function isLoggedIn(req, res, next) {
if (req.isAuthenticated())
return next();
res.redirect('/');
}
As you can see, we have all the authentication routes and the routes to show our index and profile pages. Now we have added authorization routes which will look incredibly similar to our authentication routes.
With our newly created routes, let’s update the Strategy so that our authorization routes are utilized.
We will just update the Facebook and Local Strategies to get a feel for how we can accommodate all our different scenarios.
When using the passport.authorize route, the user stored in session (since they are already logged in) will be passed to the Strategy, so we will change our code to account for that.
We’re going to show the old Strategy and then the new Strategy. Read the comments to get a full understanding of the changes.
...
// =========================================================================
// FACEBOOK ================================================================
// =========================================================================
passport.use(new FacebookStrategy({
// pull in our app id and secret from our auth.js file
clientID : configAuth.facebookAuth.clientID,
clientSecret : configAuth.facebookAuth.clientSecret,
callbackURL : configAuth.facebookAuth.callbackURL
},
// facebook will send back the token and profile
function(token, refreshToken, profile, done) {
// asynchronous
process.nextTick(function() {
// find the user in the database based on their facebook id
User.findOne({ 'facebook.id' : profile.id }, function(err, user) {
// if there is an error, stop everything and return that
// ie an error connecting to the database
if (err)
return done(err);
// if the user is found, then log them in
if (user) {
return done(null, user); // user found, return that user
} else {
// if there is no user found with that facebook id, create them
var newUser = new User();
// set all of the facebook information in our user model
newUser.facebook.id = profile.id; // set the users facebook id
newUser.facebook.token = token; // we will save the token that facebook provides to the user
newUser.facebook.name = profile.name.givenName + ' ' + profile.name.familyName; // look at the passport user profile to see how names are returned
newUser.facebook.email = profile.emails[0].value; // facebook can return multiple emails so we'll take the first
// save our user to the database
newUser.save(function(err) {
if (err)
throw err;
// if successful, return the new user
return done(null, newUser);
});
}
});
});
}));
...
Now we want the ability to authorize a user.
...
// =========================================================================
// FACEBOOK ================================================================
// =========================================================================
passport.use(new FacebookStrategy({
// pull in our app id and secret from our auth.js file
clientID : configAuth.facebookAuth.clientID,
clientSecret : configAuth.facebookAuth.clientSecret,
callbackURL : configAuth.facebookAuth.callbackURL,
passReqToCallback : true // allows us to pass in the req from our route (lets us check if a user is logged in or not)
},
// facebook will send back the token and profile
function(req, token, refreshToken, profile, done) {
// asynchronous
process.nextTick(function() {
// check if the user is already logged in
if (!req.user) {
// find the user in the database based on their facebook id
User.findOne({ 'facebook.id' : profile.id }, function(err, user) {
// if there is an error, stop everything and return that
// ie an error connecting to the database
if (err)
return done(err);
// if the user is found, then log them in
if (user) {
return done(null, user); // user found, return that user
} else {
// if there is no user found with that facebook id, create them
var newUser = new User();
// set all of the facebook information in our user model
newUser.facebook.id = profile.id; // set the users facebook id
newUser.facebook.token = token; // we will save the token that facebook provides to the user
newUser.facebook.name = profile.name.givenName + ' ' + profile.name.familyName; // look at the passport user profile to see how names are returned
newUser.facebook.email = profile.emails[0].value; // facebook can return multiple emails so we'll take the first
// save our user to the database
newUser.save(function(err) {
if (err)
throw err;
// if successful, return the new user
return done(null, newUser);
});
}
});
} else {
// user already exists and is logged in, we have to link accounts
var user = req.user; // pull the user out of the session
// update the current users facebook credentials
user.facebook.id = profile.id;
user.facebook.token = token;
user.facebook.name = profile.name.givenName + ' ' + profile.name.familyName;
user.facebook.email = profile.emails[0].value;
// save the user
user.save(function(err) {
if (err)
throw err;
return done(null, user);
});
}
});
}));
...
Now we have accounted for linking an account when a user is already logged in. We still have the same functionality as before; we just check whether the user is logged in before taking action.
Using this new code in our Strategy, we will create a new user if they are not already logged in, or we will add our Facebook credentials to our user if they are currently logged in and stored in session.
Other Strategies: The code for the Facebook Strategy will be the same for Twitter and Google. Just apply that code to both of those to get this working. We will also provide the full code so you can look at and reference it.
Now that we have the routes that will pass our user to our new Facebook Strategy, let’s make sure our UI lets our user use the newly created routes.
We will update our index.ejs and our profile.ejs to show all the login buttons on the home page, and all the accounts and link buttons on the profile page. Here is the full code for both, with the important parts highlighted.
<!doctype html>
<html>
<head>
<title>Node Authentication</title>
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.2/css/bootstrap.min.css">
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/font-awesome/4.0.3/css/font-awesome.min.css">
<style>
body { padding-top:80px; }
</style>
</head>
<body>
<div class="container">
<div class="jumbotron text-center">
<h1><span class="fa fa-lock"></span> Node Authentication</h1>
<p>Login or Register with:</p>
<a href="/login" class="btn btn-default"><span class="fa fa-user"></span> Local Login</a>
<a href="/signup" class="btn btn-default"><span class="fa fa-user"></span> Local Signup</a>
<a href="/auth/facebook" class="btn btn-primary"><span class="fa fa-facebook"></span> Facebook</a>
<a href="/auth/twitter" class="btn btn-info"><span class="fa fa-twitter"></span> Twitter</a>
<a href="/auth/google" class="btn btn-danger"><span class="fa fa-google-plus"></span> Google+</a>
</div>
</div>
</body>
</html>
<!doctype html>
<html>
<head>
<title>Node Authentication</title>
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.2/css/bootstrap.min.css">
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/font-awesome/4.0.3/css/font-awesome.min.css">
<style>
body { padding-top:80px; word-wrap:break-word; }
</style>
</head>
<body>
<div class="container">
<div class="page-header text-center">
<h1><span class="fa fa-anchor"></span> Profile Page</h1>
<a href="/logout" class="btn btn-default btn-sm">Logout</a>
</div>
<div class="row">
<!-- LOCAL INFORMATION -->
<div class="col-sm-6">
<div class="well">
<h3><span class="fa fa-user"></span> Local</h3>
<% if (user.local.email) { %>
<p>
<strong>id</strong>: <%= user._id %><br>
<strong>email</strong>: <%= user.local.email %><br>
<strong>password</strong>: <%= user.local.password %>
</p>
<a href="/unlink/local" class="btn btn-default">Unlink</a>
<% } else { %>
<a href="/connect/local" class="btn btn-default">Connect Local</a>
<% } %>
</div>
</div>
<!-- FACEBOOK INFORMATION -->
<div class="col-sm-6">
<div class="well">
<h3 class="text-primary"><span class="fa fa-facebook"></span> Facebook</h3>
<!-- check if the user has this token (is the user authenticated with this social account) -->
<% if (user.facebook.token) { %>
<p>
<strong>id</strong>: <%= user.facebook.id %><br>
<strong>token</strong>: <%= user.facebook.token %><br>
<strong>email</strong>: <%= user.facebook.email %><br>
<strong>name</strong>: <%= user.facebook.name %><br>
</p>
<a href="/unlink/facebook" class="btn btn-primary">Unlink</a>
<% } else { %>
<a href="/connect/facebook" class="btn btn-primary">Connect Facebook</a>
<% } %>
</div>
</div>
</div>
<div class="row">
<!-- TWITTER INFORMATION -->
<div class="col-sm-6">
<div class="well">
<h3 class="text-info"><span class="fa fa-twitter"></span> Twitter</h3>
<!-- check if the user has this token (is the user authenticated with this social account) -->
<% if (user.twitter.token) { %>
<p>
<strong>id</strong>: <%= user.twitter.id %><br>
<strong>token</strong>: <%= user.twitter.token %><br>
<strong>display name</strong>: <%= user.twitter.displayName %><br>
<strong>username</strong>: <%= user.twitter.username %>
</p>
<a href="/unlink/twitter" class="btn btn-info">Unlink</a>
<% } else { %>
<a href="/connect/twitter" class="btn btn-info">Connect Twitter</a>
<% } %>
</div>
</div>
<!-- GOOGLE INFORMATION -->
<div class="col-sm-6">
<div class="well">
<h3 class="text-danger"><span class="fa fa-google-plus"></span> Google+</h3>
<!-- check if the user has this token (is the user authenticated with this social account) -->
<% if (user.google.token) { %>
<p>
<strong>id</strong>: <%= user.google.id %><br>
<strong>token</strong>: <%= user.google.token %><br>
<strong>email</strong>: <%= user.google.email %><br>
<strong>name</strong>: <%= user.google.name %>
</p>
<a href="/unlink/google" class="btn btn-danger">Unlink</a>
<% } else { %>
<a href="/connect/google" class="btn btn-danger">Connect Google</a>
<% } %>
</div>
</div>
</div>
</div>
</body>
</html>
Now we will have the links to each of our login methods. Then after they have logged in with one, the profile will check which accounts are already linked and which are not.
If an account is not yet linked, it will show the Connect Button. If an account is already linked, then our view will show the account information and the unlink button.
Remember that our user is passed to our profile view from the routes.js file.
Our social accounts can easily be configured this way. The only remaining problem is connecting a local account, since the user will need to see a signup form to add their email and password.
We have already created a route to show our new connection form (in our routes.js file: app.get('/connect/local')). All we need to do is create the view that the route brings up.
Create a file in your views folder: views/connect-local.ejs.
<!doctype html>
<html>
<head>
<title>Node Authentication</title>
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.2/css/bootstrap.min.css">
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/font-awesome/4.0.3/css/font-awesome.min.css">
<style>
body { padding-top:80px; }
</style>
</head>
<body>
<div class="container">
<div class="col-sm-6 col-sm-offset-3">
<h1><span class="fa fa-user"></span> Add Local Account</h1>
<% if (message.length > 0) { %>
<div class="alert alert-danger"><%= message %></div>
<% } %>
<!-- LOGIN FORM -->
<form action="/connect/local" method="post">
<div class="form-group">
<label>Email</label>
<input type="text" class="form-control" name="email">
</div>
<div class="form-group">
<label>Password</label>
<input type="password" class="form-control" name="password">
</div>
<button type="submit" class="btn btn-warning btn-lg">Add Local</button>
</form>
<hr>
<p><a href="/profile">Go back to profile</a></p>
</div>
</div>
</body>
</html>
This will look incredibly similar to our signup.ejs form; that’s because it really is. We pretty much just changed the verbiage and the form’s action URL.
Now when someone tries to connect a local account, they will be directed to this form, and then when submitted, they will be directed to our Local Strategy. That links the accounts!
With just those routes and the update to our Passport Strategies, our application can now link accounts together! Take a look at a user in our database that has all their accounts linked using Robomongo:
Linking accounts was easy. What about unlinking? Let’s say a user no longer wants their Facebook account linked.
For our purposes, when a user wants to unlink a social account, we will remove only their token; for a local account, we remove the email and password. We will keep their id in the database just in case they realize their mistake of leaving and want to come back and rejoin our application.
We can do this all in our routes file. You are welcome to create a controller and do all this logic there. Then you would just call the controller from the routes. For simplicity’s sake, we’ll throw that code directly into our routes.
Let’s add our unlinking routes after our newly created authorization routes.
...
// normal routes
// authentication routes
// authorization routes
// =============================================================================
// UNLINK ACCOUNTS =============================================================
// =============================================================================
// used to unlink accounts. for social accounts, just remove the token
// for local account, remove email and password
// user account will stay active in case they want to reconnect in the future
// local -----------------------------------
app.get('/unlink/local', function(req, res) {
var user = req.user;
user.local.email = undefined;
user.local.password = undefined;
user.save(function(err) {
res.redirect('/profile');
});
});
// facebook -------------------------------
app.get('/unlink/facebook', function(req, res) {
var user = req.user;
user.facebook.token = undefined;
user.save(function(err) {
res.redirect('/profile');
});
});
// twitter --------------------------------
app.get('/unlink/twitter', function(req, res) {
var user = req.user;
user.twitter.token = undefined;
user.save(function(err) {
res.redirect('/profile');
});
});
// google ---------------------------------
app.get('/unlink/google', function(req, res) {
var user = req.user;
user.google.token = undefined;
user.save(function(err) {
res.redirect('/profile');
});
});
...
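The four unlink routes are nearly identical, so if you do move this logic into a controller as suggested above, a single factory can generate all of them. This is only a sketch (the helper name is ours, not from the tutorial), assuming the same Mongoose user model and field names used throughout this series:

```javascript
// Hypothetical refactor (not required by the tutorial): generate the
// unlink handlers from one factory instead of repeating them four times.
function makeUnlinkHandler(clearFields) {
  return function (req, res) {
    var user = req.user; // pulled from the session, as above
    clearFields.forEach(function (field) {
      // e.g. 'facebook.token' -> user.facebook.token = undefined
      var parts = field.split('.');
      user[parts[0]][parts[1]] = undefined;
    });
    user.save(function (err) {
      res.redirect('/profile');
    });
  };
}

// usage sketch:
// app.get('/unlink/local',    makeUnlinkHandler(['local.email', 'local.password']));
// app.get('/unlink/facebook', makeUnlinkHandler(['facebook.token']));
```

Note that, like the routes above, this keeps the provider id in place and only clears the credentials.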
In these routes, we just pull the user's information out of the request (session) and then remove the relevant fields. We already created the links to these routes in profile.ejs, so now that the routes finally exist, they will work.
Now you can link an account and unlink an account.
For relinking to work, we will have to do a little more configuration. Since the id is already stored in the database, we have to plan for the scenario where a user links an account that was previously linked. After a user unlinks an account, their id still lives in the database. Therefore, when a user logs in or relinks an account, we have to check whether their id already exists in the database.
We will handle this in our Strategy. Let’s add to our Facebook Strategy.
...
// =========================================================================
// FACEBOOK ================================================================
// =========================================================================
passport.use(new FacebookStrategy({
// pull in our app id and secret from our auth.js file
clientID : configAuth.facebookAuth.clientID,
clientSecret : configAuth.facebookAuth.clientSecret,
callbackURL : configAuth.facebookAuth.callbackURL,
passReqToCallback : true // allows us to pass in the req from our route (lets us check if a user is logged in or not)
},
// facebook will send back the token and profile
function(req, token, refreshToken, profile, done) {
// asynchronous
process.nextTick(function() {
// check if the user is already logged in
if (!req.user) {
// find the user in the database based on their facebook id
User.findOne({ 'facebook.id' : profile.id }, function(err, user) {
// if there is an error, stop everything and return that
// ie an error connecting to the database
if (err)
return done(err);
// if the user is found, then log them in
if (user) {
// if there is a user id already but no token (user was linked at one point and then removed)
// just add our token and profile information
if (!user.facebook.token) {
user.facebook.token = token;
user.facebook.name = profile.name.givenName + ' ' + profile.name.familyName;
user.facebook.email = profile.emails[0].value;
user.save(function(err) {
if (err)
throw err;
return done(null, user);
});
} else {
return done(null, user); // user found and still linked, return that user
}
} else {
// if there is no user found with that facebook id, create them
var newUser = new User();
// set all of the facebook information in our user model
newUser.facebook.id = profile.id; // set the users facebook id
newUser.facebook.token = token; // we will save the token that facebook provides to the user
newUser.facebook.name = profile.name.givenName + ' ' + profile.name.familyName; // look at the passport user profile to see how names are returned
newUser.facebook.email = profile.emails[0].value; // facebook can return multiple emails so we'll take the first
// save our user to the database
newUser.save(function(err) {
if (err)
throw err;
// if successful, return the new user
return done(null, newUser);
});
}
});
} else {
// user already exists and is logged in, we have to link accounts
var user = req.user; // pull the user out of the session
// update the current users facebook credentials
user.facebook.id = profile.id;
user.facebook.token = token;
user.facebook.name = profile.name.givenName + ' ' + profile.name.familyName;
user.facebook.email = profile.emails[0].value;
// save the user
user.save(function(err) {
if (err)
throw err;
return done(null, user);
});
}
});
}));
...
Now just add that same code across the board to all of our Strategies and we have an application that can register a user, link accounts, unlink accounts, and relink accounts.
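Since the only thing that really differs between providers is where the credentials get stored, one way to cut the copy-and-paste (purely a sketch of ours, not part of the original tutorial) is a small helper that builds the credential object every Strategy saves. The field names mirror the tutorial's user model:

```javascript
// Hypothetical helper: build the credential sub-document from a passport
// profile, mirroring the field names used in the tutorial's user model.
function buildCredentials(token, profile) {
  return {
    id: profile.id,
    token: token,
    name: profile.name.givenName + ' ' + profile.name.familyName,
    // providers can return multiple emails; the tutorial takes the first
    email: profile.emails[0].value
  };
}

// inside a strategy callback, roughly:
// user.facebook = buildCredentials(token, profile);
```

Each strategy would then assign the result to its own sub-document (user.facebook, user.google, and so on), keeping the link/relink logic identical across providers.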
For those interested in seeing the entire code altogether, make sure you check out the GitHub repo.
Hopefully, we covered most of the cases that you’ll run into when authenticating and authorizing users. Make sure to take a look at the full code and the demo to make sure that everything is working properly. If you see anything that raises questions, just let me know and be sure to go look at the full code for clarification!
Thanks for sticking with us throughout this entire series. We hope you enjoyed it. We’ll be expanding on authentication further in the future by doing a Node and Angular authentication tutorial. Until then, happy authenticating!
AngularJS is an excellent framework for building websites and apps. Built-in routing, data-binding, and directives, among other features, enable AngularJS to completely handle the front-end of any type of application.
The one pitfall to using AngularJS (for now) is Search Engine Optimization (SEO). In this tutorial, we will go over how to make your AngularJS website or application crawlable by Google.
Search engine crawlers (or bots) were originally designed to crawl the HTML content of web pages. As the web evolved, so did the technologies powering it, and JavaScript became the de facto language of the web, with AJAX allowing asynchronous operations. AngularJS fully embraces this asynchronous model, and that is what creates problems for Google's crawlers.
If you are fully utilizing AngularJS, there is a strong possibility that you will only have one real HTML page that will be fed HTML partial views asynchronously. All the routing and application logic is done on the client-side, so whether you’re changing pages, posting comments, or performing other CRUD operations, you are doing it all from one page.
Rest assured, Google does have a way of indexing AJAX applications, and your AngularJS app can be crawled, indexed, and will appear in search results just like any other website. There are a few caveats and extra steps that you will need to perform, but these methods are fully supported by Google. To read more about Google’s guidelines for crawlable AJAX content visit Google’s Webmaster AJAX Crawling Guidelines.
Our application will be renderable by Googlebot and all his friends (like Bingbot). This way, we won't run into broken or empty search snippets, and we'll get the nice search results our users expect from us.
When a crawler encounters the <meta name="fragment" content="!"> tag on your page, it will add an ?_escaped_fragment_= parameter to your URL before requesting it.

Alternatives:
Prerender.io is a service that is compatible across a variety of different platforms including Node, PHP, and Ruby. The service is fully open-source but they do offer a hosted solution if you do not want to go through the hassle of setting up your own server for SEO. The folks over at Prerender believe that SEO is a right, not a privilege and they have done some great work extending their solution, adding a lot of customizable features and plugins.
We will be building a simple Node/AngularJS application that has multiple pages with dynamic content flowing throughout. We will use Node.js as our backend server with Express. Check out the Node package.json
file below to see all of our dependencies for this tutorial. Once you are ready, sign up for a free prerender.io account and get your token.
// package.json
{
"name": "Angular-SEO-Prerender",
"description": "...",
"version": "0.0.1",
"private": "true",
"dependencies": {
"express": "latest",
"prerender-node": "latest"
}
}
Now that we have our package.json
ready to go, let’s install our Node dependencies using npm install
.
The setup here is pretty standard. In our server.js
file we will require the Prerender service and connect to it using our prerender token.
// server.js
var express = require('express');
var app = module.exports = express();
app.configure(function(){
// Here we require the prerender middleware that will handle requests from Search Engine crawlers
// We set the token only if we're using the Prerender.io service
app.use(require('prerender-node').set('prerenderToken', 'YOUR-TOKEN-HERE'));
app.use(express.static("public"));
app.use(app.router);
});
// This will ensure that all routing is handed over to AngularJS
app.get('*', function(req, res){
res.sendfile('./public/index.html');
});
app.listen(8081);
console.log("Go Prerender Go!");
The main page is also pretty standard. Write your code like you normally would. The big change here will simply be adding <meta name="fragment" content="!">
to the <head>
of your page. This meta tag will tell search engine crawlers that this is a website that has dynamic JavaScript content that needs to be crawled.
Additionally, if your page is not caching properly or it’s missing content you can add the following script snippet: window.prerenderReady = false;
which will tell the Prerender service to wait until your entire page is fully rendered before taking a snapshot. You will need to set window.prerenderReady = true
once you’re sure your content has completed loading. There is a high probability that you will not need to include this snippet, but the option is there if you need it.
That’s it! Please see the code below for additional comments.
<!-- index.html -->
<!doctype html>
<!-- We will create a mainController and bind it to HTML which will give us access to the entire DOM -->
<html ng-app="prerender-tutorial" ng-controller="mainController">
<head>
<meta name="fragment" content="!">
<!-- We define the SEO variables we want to dynamically update -->
<title>Scotch Tutorial | {{ seo.pageTitle }}</title>
<meta name="description" content="{{ seo.pageDescription }}">
<!-- CSS-->
<link rel="stylesheet" type="text/css" href="/assets/bootstrap.min.css">
<style>
body { margin-top:60px; }
</style>
<!-- JS -->
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.10/angular.min.js"></script>
<script src="http://code.angularjs.org/1.2.10/angular-route.min.js"></script>
<script src="/app.js"></script>
</head>
<body>
<div class="container">
<!-- NAVIGATION BAR -->
<div class="bs-example bs-navbar-top-example">
<nav class="navbar navbar-default navbar-fixed-top">
<div class="navbar-header">
<a class="navbar-brand" href="/">Angular SEO Prerender Tutorial</a>
</div>
<ul class="nav navbar-nav">
<li><a href="/">Home</a></li>
<li><a href="/about">About</a></li>
<li><a href="/features">Features</a></li>
</ul>
</nav>
</div>
<h1 class="text-center">Welcome to the Angular SEO Prerender Tutorial</h1>
<!-- where we will inject our template data -->
<div ng-view></div>
</div>
</body>
</html>
In our app.js
, the page where we define our AngularJS code, we will need to add this code to our routes config: $locationProvider.hashPrefix('!');
. This method will change the way your URLs are written.
If you are using html5Mode you won’t see any difference, otherwise, your URLs will look like http://localhost:3000/#!/home
compared to the standard http://localhost:3000/#/home
.
This #!
in your URL is very important, as it is what will alert crawlers that your app has AJAX content and that it should do its AJAX crawling magic.
// app.js
var app = angular.module('prerender-tutorial', ['ngRoute'])
.config(function($routeProvider, $locationProvider){
$routeProvider.when('/', {
templateUrl : 'views/homeView.html',
controller: 'homeController'
})
$routeProvider.when('/about', {
templateUrl : '/views/aboutView.html',
controller: 'aboutController'
})
$routeProvider.when('/features', {
templateUrl : '/views/featuresView.html',
controller : 'featuresController'
})
$routeProvider.otherwise({
redirectTo : '/'
});
$locationProvider.html5Mode(true);
$locationProvider.hashPrefix('!');
});
function mainController($scope) {
// We will create an seo variable on the scope and decide which fields we want to populate
$scope.seo = {
pageTitle : '',
pageDescription : ''
};
}
function homeController($scope) {
// For this tutorial, we will simply access the $scope.seo variable from the main controller and fill it with content.
// Additionally you can create a service to update the SEO variables - but that's for another tutorial.
$scope.$parent.seo = {
pageTitle : 'AngularJS SEO Tutorial',
pageDescription: 'Welcome to our tutorial on getting your AngularJS websites and apps indexed by Google.'
};
}
function aboutController($scope) {
$scope.$parent.seo = {
pageTitle : 'About',
pageDescription: 'We are a content heavy website so we need to be indexed.'
};
}
function featuresController($scope) {
$scope.$parent.seo = {
pageTitle : 'Features',
pageDescription: 'Check out some of our awesome features!'
};
}
In the above code, you can see how we handle Angular routing and our different pageTitle
and pageDescription
for the pages. These will be rendered to crawlers for an SEO-ready page!
When a crawler visits your page at http://localhost:3000/#!/home, the URL will be converted to http://localhost:3000/?_escaped_fragment_=/home. Once the Prerender middleware sees this type of URL, it will make a call to the Prerender service. Alternatively, if you are using html5Mode, when a crawler visits your page at http://localhost:3000/home, the URL will be converted to http://localhost:3000/home?_escaped_fragment_=.
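To make that rewrite concrete, here is a toy illustration (our own sketch, not part of the Prerender middleware) of how the two URL styles map to their _escaped_fragment_ equivalents; real crawlers also URL-encode the fragment, which we skip here for readability:

```javascript
// Toy sketch of the URL rewrite AJAX-aware crawlers perform.
function toEscapedFragment(url) {
  if (url.indexOf('#!') !== -1) {
    // hashbang style: move the fragment into the query string
    return url.replace('#!', '?_escaped_fragment_=');
  }
  // html5Mode style: append an empty _escaped_fragment_ parameter
  return url + '?_escaped_fragment_=';
}
```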
The Prerender service will check and see if it has a snapshot or already rendered page for that URL, if it does, it will send it to the crawler, if it does not, it will render a snapshot on the fly and send the rendered HTML to the crawler for correct indexing.
Prerender provides a dashboard for you to see the different pages that have been rendered and crawled by bots. This is a great tool to see how your SEO pages are working.
I recently got a chance to chat with the creator of Prerender.io and asked him for some tips on getting your single-page app indexed. This is what he had to say:
- If you are using #s for your URLs, make sure to set the hashPrefix('!') so that the URLs are rewritten as #!s.
- Make sure you have a sitemap.xml and robots.txt.
- When testing, make sure the #! or ?_escaped_fragment_= is in the right place, as the manual tools do not behave exactly as the actual crawlers do.

Hopefully, you won't let the SEO drawback of Angular applications hold you back from using the great tool. There are services out there like Prerender and ways to crawl AJAX content. Make sure to look at the Google Webmaster AJAX Crawling Guidelines and have fun building your SEO-friendly Angular applications!
Express 5.0 is currently in the alpha release stage, and it will not be very different from Express 4. While the underlying API remains the same as that of Express 4, you may need to watch for some deprecated methods that may break your application when you upgrade.
It should, however, be a smooth transition between Express v4 and Express v5 as we will see in the following examples.
We’ve built APIs on Express 4, so we’re very excited to see that 5 is coming shortly
To get started, set up a new Node.js project with npm init
and install the latest Express 5 alpha release from npm. You can always refer to the release logs from the Express repository’s history.md
file.
npm install express@5.0.0-alpha.3 --save
We are now ready to see some of the changes. Most of the methods and properties that we will look at here were previously deprecated and have been completely removed in Express 5.
app.del()
app.del()
has been removed as an HTTP DELETE registration method in favor of app.delete()
.
app.del('/resource', (req, res) => res.send('deleted'));
//4: express deprecated app.del: Use app.delete instead
//5: TypeError: app.del is not a function
app.param(fn)
The app.param(fn)
signature was used for modifying the behavior of the app.param(name, fn)
function. It has been deprecated since v4.11.0, and Express 5 no longer supports it at all.
req.param(name)
You can no longer access request parameters using the param
method. Instead, use the req.params
, req.body
, or req.query
objects.
// Bad
app.get('/users/:id', (req, res) => {
res.send(User.find(req.param('id')));
});
//4: express deprecated req.param(name): Use req.params, req.body, or req.query instead
//5: TypeError: req.param is not a function
// Good
app.get('/user/:id', (req, res) => {
res.send(User.find(req.params.id));
});
The following methods have been pluralized in Express 5:
req.acceptsCharset()
is replaced by req.acceptsCharsets()
.req.acceptsEncoding()
is replaced by req.acceptsEncodings()
.req.acceptsLanguage()
is replaced by req.acceptsLanguages()
.The res.sendfile()
function has been replaced by a camel-cased version res.sendFile()
.
res.json(obj, status)
Express 5 no longer supports the signature res.json(obj, status)
. Instead, set the status and then chain it to the res.json()
method like:
res.status(status).json(obj)
res.jsonp(obj, status)
Express 5 no longer supports the signature res.jsonp(obj, status)
. Instead, set the status and then chain it to the res.jsonp()
method like:
res.status(status).jsonp(obj)
res.send(body, status)
Express 5 no longer supports the signature res.send(obj, status)
. Instead, set the status and then chain it to the res.send()
method like:
res.status(status).send(obj)
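If it helps to see why the chaining in the three replacements above works, here is a toy stand-in for a response object (our own illustration, not Express's real implementation) showing the fluent pattern that res.status() relies on:

```javascript
// Toy response object illustrating the res.status(...).send(...) chain;
// status() returns the object itself so further calls can be chained.
function makeRes() {
  var res = {
    statusCode: 200,
    body: null,
    status: function (code) { res.statusCode = code; return res; },
    send: function (payload) { res.body = payload; return res; },
    json: function (obj) { return res.send(JSON.stringify(obj)); }
  };
  return res;
}

var res = makeRes();
res.status(404).json({ error: 'Not Found' });
```

The real Express response object implements the same idea: status() sets the code and returns `this`, so send(), json(), and jsonp() can follow in one expression.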
Leading colon (:) in name for app.param(name, fn)
If your first experience with the micro framework was Express 4, you probably have no idea what this is all about.
A leading colon character (:
) in the name for the app.param(name, fn)
function is a remnant of Express 3, and for the sake of backward compatibility, Express 4 supported it with a deprecation notice. Express 5 will silently ignore it and use the name parameter without prefixing it with a colon.
This should not affect your code if you follow the Express 4 documentation of app.param
, as it makes no mention of the leading colon.
In Express 4, res.send(status)
where status is a number was deprecated as a way of setting the HTTP status code header in favor of res.sendStatus(statusCode)
.
app.get('/', (req, res) => {
res.send(500);
});
// 4: express deprecated res.send(status): Use res.sendStatus(status) instead
This is the expected response in Express 4 (500 Internal Server Error)
http :8001

Output:
HTTP/1.1 500 Internal Server Error
Connection: keep-alive
Content-Length: 21
Content-Type: text/plain; charset=utf-8
Date: Wed, 08 Feb 2017 12:01:48 GMT
ETag: W/"15-3JQVFLwoG6yepWGqlDPA/A"
X-Powered-By: Express
Internal Server Error
The same code returns a quite different result in Express 5: a 200 status code with '500' as the body.
http :8000

Output:
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 3
Content-Type: application/json; charset=utf-8
Date: Wed, 08 Feb 2017 12:04:22 GMT
ETag: W/"3-zuYxEhwuySMvOi8CitXImw"
X-Powered-By: Express
500
Express 5 no longer supports the signature res.send(status)
, where status is a number. Instead, use the res.sendStatus(statusCode)
function, which sets the HTTP response header status code and sends the text version of the code: “Not Found”, “Internal Server Error”, and so on.
If you need to send a number by using the res.send()
function, quote the number to convert it to a string, so that Express does not interpret it as an attempt to use the unsupported old signature.
app.router
The app.router
object, which was removed in Express 4, has made a comeback in Express 5. In the new version, this object is just a reference to the base Express router, unlike in Express 3, where an app had to explicitly load it.
req.host
In Express 4, the req.host property incorrectly stripped off the port number if it was present. In Express 5, the port number is maintained.
app.get('/', (req, res) => {
res.status(200).send(req.host);
});
// 4: localhost
// 5: localhost:8000
req.query
In Express 4.7 and Express 5 onwards, the query parser option can accept false
to disable query string parsing when you want to use your own function for query string parsing logic.
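As a sketch of what plugging in your own parsing logic might look like (the parser function below is our own minimal example; a real one would also handle arrays and repeated keys), Express's 'query parser' setting accepts a custom function as well as false:

```javascript
// Minimal custom query-string parser you might register instead of the
// built-in one (illustrative only; ignores arrays and repeated keys).
function parseQuery(qs) {
  var out = {};
  if (!qs) return out;
  qs.split('&').forEach(function (pair) {
    var kv = pair.split('=');
    out[decodeURIComponent(kv[0])] = decodeURIComponent(kv[1] || '');
  });
  return out;
}

// app.set('query parser', false);      // disable parsing entirely
// app.set('query parser', parseQuery); // or supply your own function
```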
res.render()
This method now enforces asynchronous behavior for all view engines, avoiding bugs caused by view engines that had a synchronous implementation and that violated the recommended interface.
Read more about this in this Express GitHub issue.
Middleware Promises
Promises have become quite popular in asynchronous development, and it has been proposed that next() return a promise so that middleware can be resolved and propagated through the next method.
This is how that would look.
app.use(function (req, res, next) {
// a promise must be returned,
// otherwise the function will be assumed to be synchronous
return User.get(req.session.userid).then(function (user) {
req.user = user
})
.then(next) // execute all downstream middleware
.then(function () {
// send a response after all downstream middleware have executed
// this is equivalent to koa's "upstream"
res.send(req.user) // note: use req.user, since `user` is not in scope here
})
})
Express 5 is still in alpha so there are bound to be changes. I will keep updating this post as the updates are rolled out. Take a look at this pull request if you want to see an updated list of changes for release.
In this tutorial, we will create a simple registration form with just fields for fullname, email, and password. We will use zxcvbn to estimate the strength of the password in the form and provide visual feedback. We will also use AngularJS for effortless two-way data binding.
At the end of the tutorial, the final page will behave as shown in the following live demo:
View Password Strength on JSFiddle
Passwords are commonly used for user authentication in most web applications, and as such, it is required that passwords be stored safely. Over the years, techniques such as one-way password hashing, which usually involves salting, have been employed to hide the real representation of passwords stored in a database.
Although password hashing is a great step in securing passwords, users still pose a major challenge to password security. A user who chooses a very common word as a password makes the effort of hashing fruitless, since a brute-force attack can crack such a password in a very short time. In fact, with highly sophisticated infrastructure, cracking may take only milliseconds, depending on the password's complexity and length.
Many web applications today, such as Google, Twitter, and Dropbox, insist on users having considerably strong passwords, either by enforcing a minimum password length or requiring some combination of alphanumeric characters and perhaps symbols in the password.
How then is password strength measured? Dropbox developed an algorithm for a realistic password strength estimator, inspired by password crackers. This algorithm is packaged in a JavaScript library called zxcvbn. In addition, the package contains a dictionary of commonly used English words, names, and passwords.
Before we begin the tutorial, we would download all the dependencies we need using the Bower package manager. If you don’t already have Bower in your system, you can follow the Bower Installation Guide. Run the following command to install all the dependencies for the tutorial.
bower install zxcvbn angularjs#1.5.9 bootstrap
The root folder should contain two folders and one file:

- assets folder
- bower_components folder
- index.html file

The folder structure for our project should look like the following screenshot - you can create the folders and directories as required.
Let’s begin by adding the basic markup for our page in the index.html
file. We would link to the Bootstrap files - bootstrap.min.css
and bootstrap-theme.min.css
, and also the angular.min.js
framework from our bower_components
folder. We would also link to our project’s css
and js
files in the assets
folder. See the following code for the basic HTML markup of our page.
<!DOCTYPE html>
<html class="no-js">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>Password Strength</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="bower_components/bootstrap/dist/css/bootstrap.min.css">
<link rel="stylesheet" href="bower_components/bootstrap/dist/css/bootstrap-theme.min.css">
<link rel="stylesheet" href="assets/css/main.css">
</head>
<body>
<div class="main-container">
<div class="form-container">
<form action="" method="POST" role="form">
<legend class="form-label">Join the Team</legend>
<div class="form-group">
<label for="fullname">Fullname</label>
<input type="text" class="form-control" id="fullname"
placeholder="Enter Fullname">
</div>
<div class="form-group">
<label for="email">Email</label>
<input type="email" class="form-control" id="email"
placeholder="Enter Email Address">
</div>
<div class="form-group">
<label for="password">Password</label>
<div class="form-hint">To conform with our Strong Password policy, you are
required to use a sufficiently strong password. Password must be more than
7 characters.</div>
<input type="password" class="form-control" id="password"
placeholder="Enter Password">
</div>
<button type="submit" class="btn btn-primary">Submit</button>
</form>
</div>
</div>
<script src="bower_components/angular/angular.min.js"></script>
<script src="assets/js/app.js"></script>
</body>
</html>
At this point, our page should look like the following screenshot:
Now we will add some CSS rules to the assets/css/main.css file to spice up the page.
body {
margin: 0;
padding: 0;
}
.main-container {
display: table;
width: 400px;
position: absolute;
top: 0;
bottom: 0;
left: 0;
right: 0;
margin: auto;
}
.form-container {
position: relative;
bottom: 100px;
display: table-cell;
vertical-align: middle;
}
.form-container form > div {
padding: 0 15px;
}
.form-container form > button {
margin-left: 15px;
}
legend.form-label {
font-size: 24pt;
padding: 0 15px;
}
.form-hint {
font-size: 7pt;
line-height: 9pt;
margin: -5px auto 5px;
color: #999;
}
At this point, our page should look like the following screenshot:
Loading zxcvbn asynchronously

Now we will load the zxcvbn package into our page asynchronously, by adding the script below to the assets/js/app.js file. It programmatically creates a new <script> element that is inserted before the first script element defined in the page, once the page has finished loading. The src of this script element points to the zxcvbn.js file, and the async attribute is set to true to enable asynchronous loading.
(function() {
var ZXCVBN_SRC = 'bower_components/zxcvbn/dist/zxcvbn.js';
var async_load = function() {
var first, s;
// create a <script> element using the DOM API
s = document.createElement('script');
// set attributes on the script element
s.src = ZXCVBN_SRC;
s.type = 'text/javascript';
s.async = true; // HTML5 async attribute
// Get the first script element in the document
first = document.getElementsByTagName('script')[0];
// insert the <script> element before the first in the document
return first.parentNode.insertBefore(s, first);
};
// attach async_load as callback to the window load event
if (window.attachEvent != null) {
window.attachEvent('onload', async_load);
} else {
window.addEventListener('load', async_load, false);
}
}).call(this);
Now we can try using the zxcvbn() function with any password string from our browser's console. The zxcvbn() function returns a result object with several properties. In this tutorial, we are concerned only with the score property, which is an integer from 0 to 4 (useful for implementing a strength bar):

- 0 - too guessable
- 1 - very guessable
- 2 - somewhat guessable
- 3 - safely unguessable
- 4 - very unguessable
See the following video on testing the zxcvbn()
method on the browser’s console.
Now we will make some small improvements to our code to bring in AngularJS. First, let's create a module for our app called PasswordStrength, and a simple controller for our form called FormController. We will append the following script to the assets/js/app.js file.
// creating the app module
angular.module('PasswordStrength', []);
// adding a controller to the module
angular.module('PasswordStrength').controller('FormController', function($scope) {});
Next, we will edit our index.html file to add an ng-app directive for the module on the root <html> element, and an ng-controller directive for the controller on the <form> element.
<!-- adding the ng-app directive -->
<html class="no-js" ng-app="PasswordStrength">
<!-- adding the ng-controller directive -->
<form action="" method="POST" role="form" ng-controller="FormController">
Now we can add our validation logic to the form:

- Add ng-model directives to the input fields to be able to harness the built-in NgModelController.
- Add the ng-required constraint to the input fields, since we require them to be filled.
- Disable the submit button with the ng-disabled directive, enabling it only when the form is fully validated.
- Use the ng-class and ng-show directives to provide error feedback and messages for the input elements.

The modified form in our index.html file should look like this:
<form action="" method="POST" name="joinTeamForm" role="form" ng-controller="FormController">
<legend class="form-label">Join the Team</legend>
<div class="form-group">
<label for="fullname">Fullname</label>
<div class="error form-hint" ng-show="joinTeamForm.fullname.$dirty
&& joinTeamForm.fullname.$error.required" ng-cloak>{{"This field is required."}}</div>
<input type="text" class="form-control" ng-class="(joinTeamForm.fullname.$dirty &&
joinTeamForm.fullname.$invalid) ? 'error' : ''" id="fullname" name="fullname"
placeholder="Enter Fullname" ng-required="true" ng-model="fullname">
</div>
<div class="form-group">
<label for="email">Email</label>
<div class="error form-hint" ng-show="joinTeamForm.email.$dirty &&
joinTeamForm.email.$error.required" ng-cloak>{{"This field is required."}}</div>
<div class="error form-hint" ng-show="joinTeamForm.email.$dirty &&
joinTeamForm.email.$error.email" ng-cloak>{{"Email is invalid."}}</div>
<input type="email" class="form-control" ng-class="(joinTeamForm.email.$dirty &&
joinTeamForm.email.$invalid) ? 'error' : ''" id="email" name="email"
placeholder="Enter Email Address" ng-required="true" ng-model="email">
</div>
<div class="form-group">
<label for="password">Password</label>
<div class="form-hint">To conform with our Strong Password policy, you are required to use
a sufficiently strong password. Password must be more than 7 characters.</div>
<input type="password" class="form-control" ng-class="(joinTeamForm.password.$dirty &&
joinTeamForm.password.$invalid) ? 'error' : ''" id="password" name="password"
placeholder="Enter Password" ng-required="true" ng-model="password">
</div>
<button type="submit" class="btn btn-primary" ng-disabled="joinTeamForm.$invalid">Submit</button>
</form>
In the preceding code, we have named our form joinTeamForm and have given names to the input elements, making it possible for us to harness Angular’s built-in NgModelController. We have also created data-bindings for the input elements using the ng-model directive.
We used some of the validation state properties ($dirty, $invalid, $error) provided by the NgModelController API to determine if our form is fully validated. The ng-class directive was also used to dynamically add an error class to the input elements based on the validation criteria.
Also, we used ng-cloak to prevent the browser from showing our error messages while rendering. Since we included angular.js at the end of our page, this would not be effective. To correct this, we would add the following CSS rule to the main.css file.
[ng\:cloak], [ng-cloak], [data-ng-cloak], [x-ng-cloak], .ng-cloak, .x-ng-cloak {
display: none !important;
}
Now, we would add some CSS rules to the main.css file for our error feedback.
.form-control.error {
border-color: red;
}
.form-hint.error {
color: #C00;
font-weight: bold;
font-size: 8pt;
}
At this point, our page should look like the following screenshot - observe that the form submit button is now disabled on page load.
Now we would go ahead and create the password strength meter. We would also create a new directive in our module called okPassword, which will define a custom validation constraint for the password element, ensuring that a valid password must be more than 7 characters and must have a minimum zxcvbn score of 2. We will also add visual feedback to keep track of the password length.
First, let’s add the following into our index.html file, immediately after the password field element, for our password strength meter.
<div class="label password-count" ng-class="password.length > 7 ? 'label-success' : 'label-danger'"
ng-cloak>{{ password | passwordCount:7 }}</div>
<div class="strength-meter">
<div class="strength-meter-fill" data-strength="{{passwordStrength}}"></div>
</div>
Here, we are using the ng-class directive and Bootstrap’s label classes to provide feedback based on the password length. We are also using a custom passwordCount filter to slightly format the display of the password length. In addition, we are binding the value of the data-strength attribute to the passwordStrength property of the controller’s scope, which will contain the password strength score.
Now, let’s add the following CSS rules to the main.css file to style the password strength meter we just created.
.password-count {
float: right;
position: relative;
bottom: 24px;
right: 10px;
}
.strength-meter {
position: relative;
height: 3px;
background: #DDD;
margin: 10px auto 20px;
border-radius: 3px;
}
.strength-meter:before, .strength-meter:after {
content: '';
height: inherit;
background: transparent;
display: block;
border-color: #FFF;
border-style: solid;
border-width: 0 5px 0 5px;
position: absolute;
width: 80px;
z-index: 10;
}
.strength-meter:before {
left: 70px;
}
.strength-meter:after {
right: 70px;
}
.strength-meter-fill {
background: transparent;
height: inherit;
position: absolute;
width: 0;
border-radius: inherit;
transition: width 0.5s ease-in-out, background 0.25s;
}
.strength-meter-fill[data-strength='0'] {
background: darkred;
width: 20%;
}
.strength-meter-fill[data-strength='1'] {
background: orangered;
width: 40%;
}
.strength-meter-fill[data-strength='2'] {
background: orange;
width: 60%;
}
.strength-meter-fill[data-strength='3'] {
background: yellowgreen;
width: 80%;
}
.strength-meter-fill[data-strength='4'] {
background: green;
width: 100%;
}
Here, we have made our password strength meter indicate five levels for the different password strength scores, ranging from 0 to 4. We have also specified different colors and fill widths for each score level.
At this point, our page should look like the following screenshot. Observe that the page exhibits some strange behavior since we have not yet defined the passwordCount filter.
Before we proceed, we would define the passwordCount filter, which slightly formats the display of the password length in the view. Let’s append the following to the app.js file to create the filter.
// creating the passwordCount filter
angular.module('PasswordStrength').filter('passwordCount', [function() {
return function(value, peak) {
var value = angular.isString(value) ? value : '',
peak = isFinite(peak) ? peak : 7;
return value && (value.length > peak ? peak + '+' : value.length);
};
}]);
In the preceding code, the passwordCount filter takes a string value and an optional peak parameter, which defaults to 7 if omitted or not a valid integer. If the length of the input string does not exceed the peak, it returns the length of the string; otherwise, it returns {peak}+.
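To make the filter’s behavior concrete, here is its core logic extracted as a plain function (the standalone helper is ours, for illustration; the real filter wraps this logic in Angular’s filter API):

```javascript
// Core logic of the passwordCount filter as a plain function.
// Non-string input is treated as '', and an invalid peak falls back to 7.
function passwordCount(value, peak) {
  value = typeof value === 'string' ? value : '';
  peak = isFinite(peak) ? peak : 7;
  return value && (value.length > peak ? peak + '+' : value.length);
}

// For example: passwordCount('secret', 7) gives 6, while
// passwordCount('longpassword', 7) gives '7+'.
```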
Now we can get visual feedback for our password length as we type. See the following screenshot:
Now, we would go on to create the okPassword directive for the password field. We would also create a service that encapsulates the implementation of the zxcvbn() function. Let’s add the following to the app.js file to create the service.
// creating a service to provide zxcvbn() functionality
angular.module('PasswordStrength').factory('zxcvbn', [function() {
return {
score: function() {
var compute = zxcvbn.apply(null, arguments);
return compute && compute.score;
}
};
}]);
Here, we have created a service called zxcvbn that provides just one API method, score(). The score() method takes the same parameters as the zxcvbn() function and calls it internally, returning the estimated password strength score.
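The wrapper pattern is easy to try outside Angular. Below is a sketch of the same score() wrapper with a stubbed zxcvbn() (the stub’s length-based scoring is purely illustrative; the real library does proper strength estimation):

```javascript
// Stub standing in for the real zxcvbn() global from the library:
// pretend longer passwords score higher (capped at 4, like the real scale).
var zxcvbn = function (password) {
  return { score: Math.min(4, Math.floor(password.length / 4)) };
};

// Same shape as the factory's return value: one score() method that
// forwards its arguments to zxcvbn() and unwraps the score.
var zxcvbnService = {
  score: function () {
    var compute = zxcvbn.apply(null, arguments);
    return compute && compute.score;
  }
};
```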
Now, we can apply this service to create our directive. Add the following to the app.js file to create the directive.
// creating the okPassword directive with zxcvbn as dependency
angular.module('PasswordStrength').directive('okPassword', ['zxcvbn', function(zxcvbn) {
return {
// restrict to only attribute and class
restrict: 'AC',
// use the NgModelController
require: 'ngModel',
// add the NgModelController as a dependency to your link function
link: function($scope, $element, $attrs, ngModelCtrl) {
$element.on('blur change keydown', function(evt) {
$scope.$evalAsync(function($scope) {
// update the $scope.password with the element's value
var pwd = $scope.password = $element.val();
// resolve password strength score using zxcvbn service
$scope.passwordStrength = pwd ? (pwd.length > 7 && zxcvbn.score(pwd) || 0)
: null;
// define the validity criterion for okPassword constraint
ngModelCtrl.$setValidity('okPassword', $scope.passwordStrength >= 2);
});
});
}
};
}]);
In the preceding code, we defined the okPassword directive with the zxcvbn service as a dependency. We specified that the directive can be used either as a class or an attribute, and that we require the NgModelController.
In the link function, we added an event listener on the element that is triggered on blur, change, and keydown events. The event listener uses the $scope.$evalAsync() method to delay the update of the scope’s properties. Also, the $setValidity() method of the NgModelController is used to define the validity criterion for the okPassword validation constraint.
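The validity rule the directive enforces can be stated as a small predicate (the helper name and the injectable score function are ours, for illustration): a password passes only if it is longer than 7 characters and its zxcvbn score is at least 2.

```javascript
// The okPassword constraint as a standalone predicate.
// scoreFn stands in for zxcvbn.score so this can run without the library.
function isOkPassword(pwd, scoreFn) {
  // Mirrors the directive: short passwords get strength 0, empty input null.
  var strength = pwd ? (pwd.length > 7 && scoreFn(pwd)) || 0 : null;
  return strength >= 2;
}
```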
Finally, we add the ok-password attribute (or class) to our password element to ensure that the validation constraint defined in our directive is applied.
<input type="password" class="form-control ok-password" ng-class="(joinTeamForm.password.$dirty &&
joinTeamForm.password.$invalid) ? 'error' : ''" id="password" name="password"
placeholder="Enter Password" ng-required="true" ng-model="password">
Now we can get visual feedback for our password strength as we type. See the following screenshot:
In this tutorial, we have implemented a password strength meter based on the zxcvbn JavaScript library in our AngularJS application. For a detailed usage guide and documentation of the zxcvbn library, see the zxcvbn repository on GitHub.
For a complete code sample of this tutorial, check out the password-strength-demo repository on GitHub. You can also see a live demo of this tutorial on jsFiddle.
In a previous tutorial we introduced a new library called PathSlider that allows us to animate elements along an SVG path. In addition, we put the library into practice and developed a simple slider with a minimum of effort. In this tutorial, we will see two more examples that illustrate the potential of our library and of SVG paths in general.
For example, we have developed another slider using a closed SVG path, as in the previous tutorial, but with some extra elastic effects:
https://codepen.io/lmgonzalves/pen/bvpLyW/
We also wanted to do something a little more original, so we created a full-screen, responsive image slider, this time using an open SVG path generated automatically with JavaScript:
https://codepen.io/lmgonzalves/pen/EEKEaM/
As you can see, the first of these sliders is very similar to the one in the previous tutorial; we have only added some elastic effects to give it a special touch. So in this tutorial, we will focus on developing the image slider. However, the code for the first slider can also be found in the GitHub repository.
So, let’s start developing this interesting images slider!
The HTML code for our images slider will be even simpler than the one used for the other two sliders. Let’s see:
<!-- Path Slider Container -->
<div class="path-slider">
<!-- Slider items -->
<a href="#" class="path-slider__item path-slider__item--1"><div class="item__circle"></div></a>
<a href="#" class="path-slider__item path-slider__item--2"><div class="item__circle"></div></a>
<a href="#" class="path-slider__item path-slider__item--3"><div class="item__circle"></div></a>
<a href="#" class="path-slider__item path-slider__item--4"><div class="item__circle"></div></a>
<a href="#" class="path-slider__item path-slider__item--5"><div class="item__circle"></div></a>
</div>
As you can see, this time we have not defined the SVG path in our HTML code. That is because we will generate it from the JavaScript code, which gives us greater flexibility, adapting the SVG path to the dimensions of the screen.
As this time our slider will be full screen, we must add some necessary styles:
// This slider will be full screen
// The `background-image` will be set using JavaScript
.path-slider {
position: relative;
width: 100%;
height: 100%;
background-position: center;
}
// We also need this extra element (generated with JavaScript) to fade the images smoothly
.path-slider__background {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background-position: center;
}
And the images corresponding to each of the elements of the slider have been defined in this way:
// Defining images
.path-slider__item--1 .item__circle {
background-image: url("../images/img1.jpg");
}
// ... More `background-image` definitions for each item
Please note that we have not gone into the styles needed to center the elements on the SVG path, or the other general styles used. If you have any doubts, you can take a look at the previous tutorial, and of course you can also see the full code in the GitHub repository.
So let’s see how to bring our slider to life!
The first thing we will do is insert the SVG path element we need to move the slider items along:
// Creating SVG and path elements and insert to DOM
var svgNS = 'http://www.w3.org/2000/svg';
var svgEl = document.createElementNS(svgNS, 'svg');
var pathEl = document.createElementNS(svgNS, 'path');
// The `getSinPath` function returns the `path` in String format
pathEl.setAttribute('d', getSinPath());
pathEl.setAttribute('class', 'path-slider__path');
svgEl.appendChild(pathEl);
document.body.appendChild(svgEl);
As you may have noticed, we have generated the path using the getSinPath function, which is responsible for returning the path as a String, taking into account the dimensions of the screen and some other parameters. We have decoupled this function into a separate file for better organization, and you can see its implementation, as well as a brief description of the available options, in the GitHub repository.
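The real getSinPath lives in the repository, so rather than guess at its exact options, here is only a rough, hypothetical sketch of how such a generator could work: sample a sine wave across the width and join the samples into a path string with M/L commands.

```javascript
// Hypothetical sine-path generator (parameter names are ours; the real
// getSinPath in the repository has its own options).
function getSinPath(width, height, amplitude, samples) {
  width = width || 1000;
  height = height || 500;
  amplitude = amplitude || height / 4;
  samples = samples || 20;
  var midY = height / 2;
  var d = 'M 0 ' + midY; // start at the vertical center of the left edge
  for (var i = 1; i <= samples; i++) {
    var x = (width / samples) * i;
    var y = midY + amplitude * Math.sin((i / samples) * 2 * Math.PI);
    d += ' L ' + x.toFixed(2) + ' ' + y.toFixed(2);
  }
  return d;
}
```

A string like this can be set directly as the d attribute of the path element, as done above.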
Now let’s see the code for getting the images of the slider items that we have defined in the CSS code, and also the code needed to smoothly switch the images every time we select an item:
// Changing `background-image`
// Firstly, saving the computed `background` of each item, as these are defined in CSS
// When item is selected, the `background` is set accordingly
var items = document.querySelectorAll('.path-slider__item');
var images = [];
for (var j = 0; j < items.length; j++) {
images.push(getComputedStyle(items[j].querySelector('.item__circle')).getPropertyValue('background-image'));
}
var imgAnimation;
var lastIndex;
var setImage = function (index) {
if (imgAnimation) {
imgAnimation.pause();
sliderContainer.style['background-image'] = images[lastIndex];
sliderContainerBackground.style['opacity'] = 0;
}
lastIndex = index;
sliderContainerBackground.style['background-image'] = images[index];
imgAnimation = anime({
targets: sliderContainerBackground,
opacity: 1,
easing: 'linear'
});
};
Then we need to add the extra element needed to fade the images smoothly, and also set the image for the initial current item (the first one):
var sliderContainer = document.querySelector('.path-slider');
var sliderContainerBackground = document.createElement('div');
sliderContainerBackground.setAttribute('class', 'path-slider__background');
setImage(0);
sliderContainer.appendChild(sliderContainerBackground);
And having all the above ready, we can initialize our slider with this simple piece of code:
// Initializing the slider
var options = {
startLength: 'center',
paddingSeparation: 100,
easing: 'easeOutCubic',
begin: function (params) {
// Item get selected, then set the `background` accordingly
if (params.selected) {
setImage(params.index);
}
}
};
var slider = new PathSlider(pathEl, '.path-slider__item', options);
As we explained in the previous tutorial, by default the PathSlider library adds event listeners for click events, so we don’t have to worry about that. All we have to do is switch the images properly, using the setImage function.
Finally, to have the path adapt to the dimensions of the screen, thus achieving responsive behavior, we just have to regenerate the SVG path and update the items’ positions on the resize event:
// Regenerate the SVG `path` and update items position on `resize` event (responsive behavior)
window.addEventListener('resize', function() {
pathEl.setAttribute('d', getSinPath());
slider.updatePositions();
});
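One refinement worth considering (our suggestion, not part of the original demo): resize fires many times per second while the window is being dragged, and regenerating the path on every event can be wasteful. A small debounce helper collapses those bursts into a single trailing call:

```javascript
// Collapse a burst of calls into one call after `wait` ms of silence.
function debounce(fn, wait) {
  var timer = null;
  return function () {
    var args = arguments;
    clearTimeout(timer);
    timer = setTimeout(function () { fn.apply(null, args); }, wait);
  };
}

// Usage with the resize handler from above (browser-only, shown commented):
// window.addEventListener('resize', debounce(function () {
//   pathEl.setAttribute('d', getSinPath());
//   slider.updatePositions();
// }, 150));
```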
This way, our slider will look great in every screen size.
And we are done! We have put into practice once again the possibilities offered by the SVG paths to develop attractive and functional components.
Please go ahead and check the live demos
Play with the code on CodePen
We really hope you have enjoyed the tutorial and that it has been useful!
Warning: For the latest information, refer to the documentation for installing and configuring GitLab or the 1-Click App for GitLab Enterprise Edition.
Okay – GitLab isn’t really your own self-hosted GitHub. I don’t believe GitLab or GitHub share any relationship besides both being Git Management Software, but it’s the best way I find to describe in laymen’s terms what GitLab is. GitLab is awesome. It’s featured packed, and it does nearly everything that GitHub does. Best of all, you get unlimited private repos with it (or technically as many as your server can handle).
I have some pretty good DevOps skills, but I’m not really a server guy. Until recently, I’ve never previously wanted to deal with the hassle of setting up my own Git server, and GitHub’s managed solution is really quite appealing. With GitHub, you have a reliable and easy solution that you never really have to worry about. It’s also very nicely integrated with a huge array of social features like forking and organizations amongst other collaboration tools. The only thing is it can get expensive real fast if you need more than a handful of private repositories.
DigitalOcean has recently made it very simple and straightforward to set up GitLab with minimal effort and fully supporting one-click restorable backups. They also even provide great resources and tutorials on it:
This post will be very similar to those articles, but I’ll be going through it step-by-step in more detail as well as some improvements and notes of my own. Feel free to read below or go straight to the DigitalOcean docs themselves.
The first thing you’ll need to do is signup with DigitalOcean.
DigitalOcean automatically will provision your server with the public keys you upload to your account. This step isn’t really required, but it makes it easier and faster to access your new server environment.
If you don’t know much about servers - don’t worry. DigitalOcean will make this very easy for us, and they’ll actually “automatically” do most of the work for us.
For this, use the domain (or subdomain) that you would like to use, for example gitlab.scotch.io.
The official recommendation for GitLab can be found here. In summary, your server should have:
However, I’ve found that GitLab still works well even if you don’t meet these requirements. If you select the smallest Droplet, GitLab will occasionally freeze or hang. This is usually fixed with a quick reboot of the server. I recommend the smallest Droplet you select is their $10/month plan. I have found no problems yet running this with a small team for both work and play.
Select the region that you would like your server to be in. You should select a region that is closest to you to reduce latency.
The next step is to select the GitLab application image provided by DigitalOcean. Selecting this basically means that GitLab will automatically be installed when the server is provisioned.
Select the Public SSH Key you added from earlier. This will allow you to SSH into the server without needing a password. Selecting this also means that DigitalOcean won’t send you a root password when the Droplet is created.
The last step is to enable backups. Even though Git is a distributed version control system, I still would enable this so that you can easily recover your Git repos if anything unexpected happens.
Now that we have created our Droplet and before we do anything, lets SSH in and make some minor setting changes. To get the IP address of your Droplet, just navigate to your Droplets in the DigitalOcean backend. After you find it, open the terminal and connect with it via SSH (no password will be required since we are using public and private keys):
- ssh root@<IP Address>
Note: You can visit the IP address in your browser now to see GitLab is working. Don’t worry about that just yet, we’ll get to that soon enough.
This helps remove warnings when rebooting GitLab. We’re going to add the hostname of your Droplet to the hosts file. To do this, open /etc/hosts with your favorite editor:
- vim /etc/hosts
Then, add your IP address, a tab, and the hostname on line 2 of that file:
- <IP Address> gitlab.scotch.io
We’ll need to set some default settings and globals for GitLab. These are things like the base URL and the default support email. Open /home/git/gitlab-shell/config.yml and update the gitlab_url variable:
- vim /home/git/gitlab-shell/config.yml
After that, we’ll need to update one last file. Open up /home/git/gitlab/config/gitlab.yml in your editor:
- vim /home/git/gitlab/config/gitlab.yml
You can go through this file and make adjustments to customize your installation. Most of these are just default settings. For the most part, you should leave it as is, but you’ll need to change host under the GitLab settings to your domain name.
host: gitlab.scotch.io
Then, you need to set the default From and Admin emails:
email_from: yo-its-me-gitlab@gitlab.scotch.io
support_email: nick@scotch.io
To finalize everything, reboot GitLab with the following command:
service gitlab restart
The final step is to now point the A record of either the domain or subdomain to your DigitalOcean IP address. I use CloudFlare to manage my domains, but your setup should look very similar to the below screenshot:
If everything worked out correctly and DNS propagated, you can now visit the URL in your browser to access your new git server!
You also can now SSH into your Droplet with the following command:
- ssh root@gitlab.scotch.io
Now that you have successfully navigated to your URL, you’ll need to log in. The default login for this image is:
Username: admin@local.host
Password: <PASSWORD>
You probably noticed this already when you were SSH-ing into the server. This appears in the welcome message.
After you log in, you’ll be prompted to change the password; then you’ll need to log in again with the same email and the new password. Once you have logged in successfully, you’ll need to change the default Admin email to your own email address. To do this, navigate to the Admin Panel located here:
/admin/users/root/edit
Next, we’ll need to update the default username (which is currently root). To do this, navigate to:
/profile/account
After you make that change, you are all set on configuring the server!
GitLab has a whole bunch of cool features. I won’t go over everything in detail because a lot of it is self-explanatory, but some of the things you should consider playing around with once logged in are:
If GitLab ever hangs or freezes on you, you can try rebooting it with:
- service gitlab restart
If that doesn’t work, you can reboot your entire Droplet through the command line with:
- shutdown -h now
or
- sudo reboot
When you do a power cycle from the command line like this, DigitalOcean will take a snapshot of your Server in case you need to safely restore. If that doesn’t work, you can just login to your DigitalOcean account and reboot the Droplet through their backend.
If you ever need to restore from a backup, all you have to do is log into the backend of DigitalOcean, select your Droplet, and click Restore from Backup.
That’s all there is to getting your own little private Git server. GitLab is an amazing tool, and the awesome guys over at DigitalOcean have made it really straightforward to set one up with backups, hardened security, “unlimited” private repos, and more.
I’m a huge fan of DigitalOcean and their services, and I highly recommend that you use them. GitHub is really worth every penny, but if you’re in need of a ton of private repos on a tight budget, GitLab might be the best solution for you.
Building applications with React can be overwhelming even after you’ve understood the elegant philosophy behind it. More so, managing large applications with React can be confusing at first. The ecosystem has grown with great libraries to save you some nightmares. But that also makes it difficult at first to figure out which library to use.
In this two-part tutorial, we’ll build and deploy a media library app. The application allows users to search and display images and short videos from external services (Flickr and Shutterstock). It would also allow users to select images and videos for preview.
We will build this application with:
We will be using Yahoo’s Flickr API and ShutterStock API for images and short videos respectively.
This tutorial assumes you have a basic understanding of JavaScript and React. Don’t worry if you have none. We will walk through and build the application from the ground up.
Part 1 of this tutorial covers basic React setup with the create-react-app package, organizing our project workflow, defining routes, and of course testing it out.
In Part 2, we will be using Redux and its async libraries; we will set it up and then integrate it into our application. Finally, we will deploy our application to Heroku to share with our friends. Our application will look like this when we’re done.
Our app will be structured to allow you to either contribute to it or use it as a sample boilerplate for bootstrapping your React/Redux applications.
There are loads of React boilerplate out there to help you get started with React. But we’ll be using create-react-app authored by the Facebook team. It allows you to create React applications with no configuration. create-react-app provides developers with the benefits of a more complex setup out of the box.
Let’s get started…
First, install the package globally:
- npm install -g create-react-app
Then, create the media-library application:
- create-react-app media-library
Bam. Our React basic setup is complete with scripts to start, build, and eject. Take a look at your package.json.
Let’s test it out.
- cd media-library
- npm start
Now, we can structure our project directory and add other dependencies.
- npm install --save redux redux-saga react-router@2.4 react-redux
Then, remove the default sample app:
- rm -rf src/**
Media-library
- public
- favicon.ico
- index.html
- src
- Api
- api.js
- actions
- mediaActions.js
- common
- Header.js
- components
- HomePage.js
- PhotoPage.js
- VideoPage.js
- constants
- actionTypes.js
- containers
- App.js
- MediaGalleryPage.js
- reducers
- imageReducer.js
- index.js
- initialState.js
- videoReducer.js
- sagas
- mediaSaga.js
- index.js
- watcher.js
- styles
- style.css
- store
- configureStore.js
- routes.js
- index.js
- package.json
If the project directory looks verbose, just be patient and let’s walk through it. The intent of the project structure is to allow you to extend the application’s functionality beyond this tutorial. This would help you stay organized moving forward.
Note: If you’re new to Redux, I recommend Lin Clark’s article, A Cartoon Intro to Redux.
What the heck is happening up there?
When the store receives an updated state, it transmits to the view layer to be rerendered.
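That one-way flow — dispatch an action, let the reducer compute the next state, notify subscribers — can be sketched with a hand-rolled store. This is a toy illustration of the idea, not the real Redux API, though Redux’s createStore behaves along these lines (the imageReducer action type here is hypothetical):

```javascript
// Toy store demonstrating the one-way data flow described above.
function createStore(reducer, initialState) {
  var state = initialState;
  var listeners = [];
  return {
    getState: function () { return state; },
    dispatch: function (action) {
      state = reducer(state, action); // reducer computes the next state
      listeners.forEach(function (l) { l(); }); // views rerender on notify
    },
    subscribe: function (listener) { listeners.push(listener); }
  };
}

// A reducer for a hypothetical image search result list.
function imageReducer(state, action) {
  if (action.type === 'LOAD_IMAGES_SUCCESS') {
    return { images: action.images };
  }
  return state; // unknown actions leave the state untouched
}
```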
Now that we understand the workflow, let’s dive into coding.
import React from 'react';
import { Link, IndexLink } from 'react-router';
const Header = () => (
<div className="text-center">
<nav className="navbar navbar-default">
<IndexLink to="/" activeClassName="active">Home</IndexLink>
{" | "}
<Link to="library" activeClassName="active">Library</Link>
</nav>
</div>
);
export default Header;
Link allows you to navigate to different routes in your application.
IndexLink is the same as Link, except that it has the onlyActiveOnIndex prop set on it.
import React from 'react';
import { Link } from 'react-router';
// Home page component. This serves as the welcome page with link to the library
const HomePage = () => (
<div className="jumbotron center">
<h1 className="lead">Welcome to Media Library built with React, Redux, and Redux-saga </h1>
<div>
<Link to="library">
<button className="btn btn-lg btn-primary"> Visit Library</button>
</Link>
</div>
</div>
);
export default HomePage;
import React, { Component, PropTypes } from 'react';
import Header from '../common/Header';
// The parent component renders the Header component and component(s) in the
// route the user navigates to.
class App extends Component {
render() {
return (
<div className="container-fluid text-center">
<Header />
{this.props.children}
</div>
);
}
}
App.propTypes = {
children: PropTypes.object.isRequired
};
export default App;
App component is the parent component of our app. Every other component is a child to it. this.props.children is where other child components are rendered.
We will implement the library route and the component that maps to it in Part 2 of this tutorial.
You would notice that for the Header and HomePage components, we’re using stateless functional components. This approach allows us to separate our presentational components from the container components.
It’s a good practice as it enforces functional composition and component reusability. Whereas container components are responsible for your business logic and connecting with the store, presentational components are responsible for the look of your view.
Simply put, presentational components are components whose purpose in life is to render values to the DOM. Container components also known as smart components provide props and behavior to presentational components.
Let’s wire up our project routes.
import React from 'react';
import { Route, IndexRoute } from 'react-router';
import App from './containers/App';
import HomePage from './components/HomePage';
// Map components to different routes.
// The parent component wraps other components and thus serves as the entrance to
// other React components.
// IndexRoute maps HomePage component to the default route
export default (
<Route path="/" component={App}>
<IndexRoute component={HomePage} />
</Route>
);
Now let’s add the entrance to our application - index.js.
import React from 'react';
import ReactDOM from 'react-dom';
import { Router, browserHistory } from 'react-router';
import routes from './routes';
// We require the routes and render to the DOM using ReactDOM API
ReactDOM.render(
<Router history={browserHistory} routes={routes} />,
document.getElementById('root')
);
We pass in our routes and browserHistory as props to Router here. browserHistory uses your browser’s History API to create a clean and real URL, without the fragment gibberish that comes with using hashHistory (for example, /library rather than /#/library). hashHistory has its use case, though.
Router is a high-level API that keeps your UI and URL in sync. It ensures that required props are passed whenever you change URL.
ReactDOM is the API for mounting our application on the DOM node(root, in our own case).
Two more things before we test our app.
Add a Bootstrap link to a CDN in our public/index.html.
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet"/>
Let’s add some custom styling.
body {
margin: 0;
padding: 0;
font-family: Helvetica, Arial, Sans-Serif, sans-serif;
background: white;
}
.title {
padding: 2px;
text-overflow: ellipsis;
overflow: hidden;
display: block;
}
.selected-image, .select-video {
height: 500px;
}
.selected-image img, .select-video video {
width: 100%;
height: 450px;
}
.image-thumbnail, .video-thumbnail {
display: flex;
justify-content: space-around;
overflow: auto;
overflow-y: hidden;
}
.image-thumbnail img, .video-thumbnail video {
width: 70px;
height: 70px;
padding: 1px;
border: 1px solid grey;
}
Let’s test our app now…
- npm start
Navigate to http://localhost:3000 in your browser.
Bam!!! We’re up again
Building application with React gets better as you understand the flow. In this part, we did:
In the second part of this tutorial, we will be exploring the power of Redux, Redux-saga, and separating our state management system from the React components for scalability and maintainability.
Tired of having to refresh your browser every single time you make changes to your LESS/SASS/CSS files? This article will take you step-by-step to getting LiveReload integrated in your development environment so you no longer have to reload your browser to see changes. For this tutorial, we’ll be using Gulp. If you’re unfamiliar with it, check out these awesome and super easy Scotch resources about it:
LiveReload is an amazing piece of software that can really help improve your workflow - especially when it comes to CSS. The purpose of this article is to get you started in that direction as quickly and as easily as possible. In this article, we’ll cover:
Imagine a “happy land” where browsers don’t need a Refresh button. Well, that’s exactly what Andrey Tarantsov set out to accomplish. LiveReload monitors changes in your file system (for example, your CSS or your images). As soon as a change is detected, such as a simple “save”, the browser is refreshed automatically. In this example, we will be examining LiveReload through editing LESS files.
We’ll be setting up LiveReload with Gulp. A lot of people immediately think that you have to purchase this software. The truth is, using LiveReload with Gulp is completely free to use. The creators of LiveReload sell a for-purchase app that makes it ridiculously easy to use.
LiveReload can also be utilized through other task runners, such as Grunt and Yeoman. And of course, we can use LiveReload when we change our JavaScript; I talk about its implementation towards the end of the tutorial.
Lastly, you may see that LiveReload is called LiveReload 3 on its GitHub repo but LiveReload 2 in the Chrome Store. There is no need to worry about conflicting versions; the way we are implementing it will work regardless of version.
Change directory cd
into the folder where your gulpfile.js
and package.json
are located via the command line. Once you are there, enter the following command:
- npm install --save-dev gulp-livereload
Next, we need to download the LiveReload Google Chrome extension; go to the Chrome Store and download it here. Make sure you can view it in your toolbar and that the circle is filled in with black. This is important or else it won’t work.
Use incognito to manage sessions? You can enable Livereload in incognito by navigating to “More Tools”, clicking “Extensions”, and then checking the box to allow it in incognito mode.
Now go to the ‘build-css’ task and type in .pipe(plugins.livereload());
after the .pipe(gulp.dest('build')).on('error', gutil.log)
. The ‘build-css’ task in its entirety should look like this…
gulp.task('build-css', function() {
return gulp.src('assets/less/*.less')
.pipe(plugins.plumber())
.pipe(plugins.less())
.on('error', function (err) {
gutil.log(err);
this.emit('end');
})
// .pipe(plugins.cssmin())
.pipe(plugins.autoprefixer(
{
browsers: [
'> 1%',
'last 2 versions',
'firefox >= 4',
'safari 7',
'safari 8',
'IE 8',
'IE 9',
'IE 10',
'IE 11'
],
cascade: false
}
))
.pipe(gulp.dest('build')).on('error', gutil.log)
.pipe(plugins.livereload());
});
Put plugins.livereload.listen();
at the top of the watch
task. It should look like this:
plugins.livereload.listen();
gulp.watch('assets/js/libs/**/*.js', ['squish-jquery']);
gulp.watch('assets/js/*.js', ['build-js']);
gulp.watch('assets/less/**/*.less', ['build-css']);
Now, run Gulp in the command line and make sure everything is okay. If an error happens, most likely you had a missing or extra semicolon.
Now go ahead and make a change to your LESS file. As you save, the usual Gulp tasks will run, but a new line will also appear in the command output: the path of your project ending in build/style.css
reloaded.
Go to your browser and you should see that the page has refreshed on save. If it hasn’t, I cannot stress enough that the circle on the LiveReload Chrome extension icon needs to be filled in with black, or else it will not work.
Want to be pro status? Add .pipe(plugins.livereload());
after the build (.pipe(gulp.dest('build')))
for your JavaScript in the build-js
task so you LiveReload after you save your .js
file - it’s only one extra line.
LiveReload can be really powerful when used with Gulp. Remember to add plugins.
before your LiveReload function call if you are utilizing the gulp-load-plugins
plugin. Don’t listen to the other tutorials that say you need to add your port or host in the Gulp file. You do not need any parameters in your livereload.listen()
; everything is set up! All in all, it’s only a couple of extra lines of JavaScript so you do not have to press COMMAND+R
anymore.
When it comes to bundlers, Webpack seems to be the de-facto choice within the Vue.js community. In this tutorial, I will show you how to use Parcel in a Vue.js application completely from scratch.
Parcel is a blazingly fast, zero-configuration web application bundler. If you have ever used Webpack prior to version 4, then this will be a relief.
In addition to this, Parcel has out-of-the-box support for JS, CSS, HTML, file assets, etc, with no plugins needed, and it builds all these assets in a quick bundle time.
To get started using Parcel, we need to first install the Parcel bundler on our computer. We can do so by using the command below:
- // using NPM
- npm install -g parcel-bundler
- // using Yarn
- yarn global add parcel-bundler
Here, we install the Parcel bundler as a global dependency. We can also install the Parcel bundler per project:
- // using NPM
- npm install --save-dev parcel-bundler
- // using Yarn
- yarn add --dev parcel-bundler
Once that is installed, we can start using it by simply running the command below:
- parcel index.html
Now let’s see how we can use Parcel in a Vue.js app. We’ll start by creating a new project:
- mkdir vue-parcel
- cd vue-parcel
- npm init -y
We create a new directory (vue-parcel
) that will hold our Vue.js app, then we initialize NPM, which will create a package.json
with some default details.
Next, let’s install the dependencies needed for our app:
- npm install --save vue
- npm install --save-dev parcel-bundler
We install Vue.js and the Parcel bundler.
Now, we can begin to flesh out the application. Within the project directory, create a new index.html
file and paste the code below in it:
<!-- index.html -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>Vue Parcel</title>
</head>
<body>
<div id="app"></div>
<!-- built files will be auto injected -->
<script src="./src/main.js"></script>
</body>
</html>
Some pretty standard HTML. We add a div
with an id of app
and also a script tag that links to a JavaScript file, which we are yet to create. The main.js
will serve as the main JavaScript file for our app and index.html
file will be used as the entry point for Parcel.
Note: Be sure to use a relative path when linking the main JavaScript file.
Next, let’s create the main.js
file. Within the project’s root, create a new src
directory. Then within the src
directory, create a new main.js
and paste the following code into it:
// src/main.js
import Vue from 'vue'
import App from './App'
new Vue({
el: '#app',
render: h => h(App)
})
First, we import Vue.js and the App
component (which we’ll create shortly). Then we create a new instance of Vue, passing to it the element we want to mount it on. Here, we are using a render function, and we pass the App
component to it.
Next, let’s create the App
component. Within src
, create a new App.vue
file and paste the code below in it:
// src/App.vue
<template>
<div class="container">
<h1>{{ message }}</h1>
</div>
</template>
<script>
export default {
name: 'App',
data() {
return {
message: 'Using Parcel In A Vue.js App',
};
},
};
</script>
<style scoped>
.container {
width: 600px;
margin: 50px auto;
text-align: center;
}
</style>
Here, we create a basic component that simply displays a message.
With our app complete, let’s run Parcel to compile and build our app. Before we do just that, let’s add a dev
script to package.json
:
// package.json
"scripts": {
"dev": "parcel index.html"
}
We can now run Parcel with:
- npm run dev
This will install the necessary dependencies (@vue/component-compiler-utils
and vue-template-compiler
) it needs to build the app, then build up the app and start a dev server. The server will be running at http://localhost:1234
, and you should get something similar to the image below:
If we want to use the full build (runtime + compiler) of Vue.js, as opposed to the runtime-only build used above, main.js
might look like this:
// src/main.js
import Vue from 'vue';
import App from './App';
new Vue({
el: '#app',
template: '<App/>',
components: { App }
})
Then we need to add the code below to package.json
:
// package.json
"alias": {
"vue": "./node_modules/vue/dist/vue.common.js"
}
Now, if we run Parcel, everything should work as expected.
In addition to the dev script, we can also create scripts to watch and automatically rebuild as files change during development, and to bundle our application for production:
// package.json
"scripts": {
...,
"watch": "parcel watch index.html",
"production": "parcel build index.html"
}
Note: watch
mode doesn’t start a web server, so you need to have your own server.
That’s it! In this tutorial, we looked at what Parcel is and how we can use it in a Vue.js application. For more details about Parcel, kindly check their documentation.
For websites like ours, code
blocks and pre
tags are necessities. Making these code blocks look good and function well is a big part of having your tutorial or example understood and easily digestible by your users.
We’ve been asked quite a few times what tool we use for syntax highlighting here at Scotch. Here it is!
Today we’ll be looking at a great tool that some of you may have heard of: PrismJS. Prism is a simple, lightweight, and easy-to-use syntax highlighter. It is easily customizable and has support for some plugins to extend its functionality.
Here’s a quick example:
<p>For websites like ours, <code>code</code> blocks and <code>pre</code> tags are necessities. Making these code blocks look good and function well is a big part of having your tutorial or example understood and easily digestible by your users.</p>
<p>We've been asked quite a few times what tool we use for syntax highlighting here at Scotch. Here it is!</p>
<p>Today we'll be looking at a great tool that some of you may have heard of: <a href="http://prismjs.com" target="_blank">PrismJS</a>. Prism is a simple, lightweight, and easy-to-use syntax highlighter. It is easily customizable and has support for some plugins to extend its functionality.</p>
Prism has you wrap code in both a pre
and a code
tag, where some other highlighters just tell you to use pre
. It also uses the HTML5-recommended way of defining a language: class="language-xxxx"
. Implementing Prism into your site is an extremely easy process. Just link to the css
and the js
files and start highlighting!
Go get your download from the Prism website.
Once you have configured Prism to your needs, download the files and include them in your project.
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>Look At Me Prism-ing</title>
<link rel="stylesheet" href="css/prism.css">
<script src="js/prism.js"></script>
</head>
<body>
</body>
</html>
That’s it. Now we are ready to use Prism.
After you have included the necessary files, using Prism is very easy. All you have to do is add a pre
and code
tag to your site. Then add a class to your code
tag and you have beautiful syntax highlighting.
<pre>
<code class="language-markup">
look at my html stuff here
</code>
</pre>
Just like that, you have beautiful syntax highlighting. Notice how we use language-markup
to highlight HTML files. Here are the different classes to use for the different languages.
Language | Class |
---|---|
HTML | language-markup |
CSS | language-css |
JavaScript | language-javascript |
CoffeeScript | language-coffeescript |
PHP | language-php |
Ruby | language-ruby |
Go | language-go |
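Since these class names are just part of the markup, you can also generate the blocks dynamically. Here is a hedged sketch in plain JavaScript; the prismBlock helper and its escaping rules are our own illustration, not part of Prism’s API:

```javascript
// Hypothetical helper: wrap source code in the pre/code markup Prism expects.
function prismBlock(language, code) {
  // Escape HTML so the code is displayed rather than rendered.
  const escaped = code
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
  return `<pre><code class="language-${language}">${escaped}</code></pre>`;
}

console.log(prismBlock('markup', '<p>hello</p>'));
// <pre><code class="language-markup">&lt;p&gt;hello&lt;/p&gt;</code></pre>
```

Injecting the returned string into the page gives Prism exactly the structure it highlights.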
Prism lets you extend the features using plugins and it has some great ones ready to go.
Highlight a specific line in your code. Use the data-line
attribute on your pre
tag.
<pre data-line="4, 6, 10-13">
<code class="language-css">
body { background:#F2F2F2; }
h1, h2, h3, h4, h5, h6 { font-family:'Raleway'; }
.container { width:90%; }
</code>
</pre>
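The data-line value mixes single lines and ranges. As a hedged illustration of how such a value can be interpreted (this parser is our own sketch, not Prism’s internal code):

```javascript
// Hypothetical sketch: expand a data-line value like "4, 6, 10-13"
// into the individual line numbers to highlight.
function parseLineRanges(spec) {
  return spec.split(',').flatMap((part) => {
    const [start, end] = part.trim().split('-').map(Number);
    if (Number.isNaN(start)) return [];        // skip empty/garbage parts
    if (end === undefined) return [start];     // single line, e.g. "4"
    const lines = [];
    for (let n = start; n <= end; n += 1) lines.push(n); // range, e.g. "10-13"
    return lines;
  });
}

console.log(parseLineRanges('4, 6, 10-13')); // [ 4, 6, 10, 11, 12, 13 ]
```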
Add line numbers to your code blocks. Do this by adding a class to your pre
tag.
pre class="line-numbers"
Using Prism is a quick and easy way to get beautiful syntax highlighting for your code. There are other alternatives out there, but we feel that Prism does the job well and is incredibly easy to use.
If you have any favorite tools for showing off code or anything similar, sound off in the comments.
Note: Part three of a three-part series.
Build a RESTful JSON API With Rails 5 - Part One
Build a RESTful JSON API With Rails 5 - Part Two
Build a RESTful JSON API With Rails 5 - Part Three
In part two of this tutorial, we added token-based authentication with JWT (JSON Web Tokens) to our todo API.
In this final part of the series, we’ll wrap up with the following:
When building an API, whether public or internal-facing, it’s highly recommended that you version it. This might seem trivial when you have total control over all clients. However, when the API is public-facing, you want to establish a contract with your clients: every breaking change should be a new version. Convincing enough? Great, let’s do this!
In order to version a Rails API, we need to do two things:
Rails routing supports advanced constraints. Provided an object that responds to matches?
, you can control which controller handles a specific route.
We’ll define a class ApiVersion
that checks the API version from the request headers and routes to the appropriate controller module. The class will live in app/lib
since it’s non-domain-specific.
- # create the class file
- touch app/lib/api_version.rb
Implement ApiVersion
class ApiVersion
attr_reader :version, :default
def initialize(version, default = false)
@version = version
@default = default
end
# check whether version is specified or is default
def matches?(request)
check_headers(request.headers) || default
end
private
def check_headers(headers)
# check version from Accept headers; expect custom media type `todos`
accept = headers[:accept]
accept && accept.include?("application/vnd.todos.#{version}+json")
end
end
The ApiVersion
class accepts a version and a default flag on initialization. In accordance with Rails constraints, we implement an instance method matches?
. Rails will call this method with the request object when matching routes.
From the request object, we can access the Accept
headers and check for the requested version or if the instance is the default version. This process is called content negotiation. Let’s add some more context to this.
REST is closely tied to the HTTP specification. HTTP defines mechanisms that make it possible to serve different versions (representations) of a resource at the same URI. This is called content negotiation.
Our ApiVersion
class implements server-driven content negotiation where the client (user agent) informs the server what media types it understands by providing an Accept HTTP header.
According to the Media Type Specification, you can define your own media types using the vendor tree i.e., application/vnd.example.resource+json
.
The vendor tree is used for media types associated with publicly available products. It uses the “vnd” facet.
Thus, we define a custom vendor media type application/vnd.todos.{version_number}+json
giving clients the ability to choose which API version they require.
Cool, now that we have the constraint class, let’s change our routing to accommodate this.
Since we don’t want to have the version number as part of the URI (this is argued as an anti-pattern), we’ll make use of the module scope to namespace our controllers.
Let’s move the existing todos and todo-items resources into a v1
namespace.
Rails.application.routes.draw do
# For details on the DSL available within this file, see http://guides.rubyonrails.org/routing.html
# namespace the controllers without affecting the URI
scope module: :v1, constraints: ApiVersion.new('v1', true) do
resources :todos do
resources :items
end
end
post 'auth/login', to: 'authentication#authenticate'
post 'signup', to: 'users#create'
end
We’ve set the version constraint at the namespace level. Thus, this will be applied to all resources within it. We’ve also defined v1
as the default version; in cases where the version is not provided, the API will default to v1
.
In the event we were to add new versions, they would have to be defined above the default version, since Rails cycles through all routes from top to bottom and uses the first constraint that matches (i.e., the first whose matches?
method resolves to true).
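This first-match behavior can be pictured as a simple linear search (a hedged, framework-agnostic sketch, not Rails internals):

```javascript
// Hypothetical sketch: try each route's constraint in order; first match wins.
function resolveRoute(routes, request) {
  for (const route of routes) {
    if (route.matches(request)) return route.name;
  }
  return null;
}

const routes = [
  { name: 'v2', matches: (req) => req.accept.includes('v2') },
  { name: 'v1', matches: () => true }, // the default version matches anything
];

console.log(resolveRoute(routes, { accept: 'application/vnd.todos.v2+json' })); // 'v2'
console.log(resolveRoute(routes, { accept: 'application/json' }));              // 'v1'
```

If the default were listed first, no request would ever reach v2, which is why non-default versions must sit above it.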
Next up, let’s move the existing todos and items controllers into the v1
namespace. First, create a module directory in the controllers folder.
- mkdir app/controllers/v1
Move the files into the module folder.
- mv app/controllers/{todos_controller.rb,items_controller.rb} app/controllers/v1
That’s not all; we also need to define the controllers in the v1 namespace. Let’s start with the todos controller.
module V1
class TodosController < ApplicationController
# [...]
end
end
Do the same for the items controller.
module V1
class ItemsController < ApplicationController
# [...]
end
end
Let’s fire up the server and run some tests.
- # get auth token
- http :3000/auth/login email=foo@bar.com password=foobar
- # get todos from API v1
- http :3000/todos Accept:'application/vnd.todos.v1+json' Authorization:'ey...AWH3FNTd3T0jMB7HnLw2bYQbK0g'
- # attempt to get from API v2
- http :3000/todos Accept:'application/vnd.todos.v2+json' Authorization:'ey...AWH3FNTd3T0jMB7HnLw2bYQbK0g'
In case we attempt to access a nonexistent version, the API will default to v1
since we set it as the default version. For testing purposes, let’s define v2
.
Generate a v2
todos controller
- rails g controller v2/todos
Define the namespace in the routes.
Rails.application.routes.draw do
# For details on the DSL available within this file, see http://guides.rubyonrails.org/routing.html
# module the controllers without affecting the URI
scope module: :v2, constraints: ApiVersion.new('v2') do
resources :todos, only: :index
end
scope module: :v1, constraints: ApiVersion.new('v1', true) do
# [...]
end
# [...]
end
Remember, non-default versions have to be defined above the default version.
Since this is a test controller, we’ll define an index action with a dummy response.
class V2::TodosController < ApplicationController
def index
json_response({ message: 'Hello there'})
end
end
Note the namespace syntax; this is Ruby shorthand for defining a class within a namespace. Great, now fire up the server once more and run some tests.
- # get todos from API v1
- http :3000/todos Accept:'application/vnd.todos.v1+json' Authorization:'eyJ0e...Lw2bYQbK0g'
- # get todos from API v2
- http :3000/todos Accept:'application/vnd.todos.v2+json' Authorization:'eyJ0e...Lw2bYQbK0g'
Voila! Our API responds to version 2!
At this point, if we wanted to get a todo and its items, we’d have to make two API calls. Although this works well, it’s not ideal.
We can fix this with serializers. Serializers allow for custom representations of JSON responses, and Active Model Serializers make it easy to define which model attributes and relationships need to be serialized. In order to get todos with their respective items, we define a serializer on the Todo model that includes its attributes and relationships.
First, let’s add active model serializers to the Gemfile:
# [...]
gem 'active_model_serializers', '~> 0.10.0'
# [...]
Run bundle to install it:
- bundle install
Generate a serializer from the todo model:
- rails g serializer todo
This creates a new directory app/serializers
and adds a new file todo_serializer.rb
. Let’s define the todo serializer with the data that we want it to contain.
class TodoSerializer < ActiveModel::Serializer
# attributes to be serialized
attributes :id, :title, :created_by, :created_at, :updated_at
# model association
has_many :items
end
We define a whitelist of attributes to be serialized (only these attributes will appear in the response) and a model association to the item model; this way the payload will include an array of items. Fire up the server and let’s test this.
- # create an item for todo with id 1
- http POST :3000/todos/1/items name='Listen to Don Giovanni' Accept:'application/vnd.todos.v1+json' Authorization:'ey...HnLw2bYQbK0g'
- # get all todos
- http :3000/todos Accept:'application/vnd.todos.v1+json' Authorization:'ey...HnLw2bYQbK0g'
This is great. One request to rule them all!
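The shape of that serialized payload can be sketched in plain JavaScript (a hedged illustration; the field names mirror the models above, and serializeTodo is our own name):

```javascript
// Hypothetical sketch of what the serializer produces: whitelisted
// attributes plus the has_many association as an embedded array.
function serializeTodo(todo) {
  const { id, title, created_by, created_at, updated_at } = todo;
  return {
    id, title, created_by, created_at, updated_at,
    items: todo.items.map(({ id, name, done }) => ({ id, name, done })),
  };
}

const todo = {
  id: 1, title: 'Mozart', created_by: 1,
  created_at: '2017-01-01', updated_at: '2017-01-02',
  secret_field: 'dropped', // not whitelisted, so it never reaches the client
  items: [{ id: 1, name: 'Listen to Don Giovanni', done: false, todo_id: 1 }],
};

console.log(serializeTodo(todo).items.length); // 1
```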
Our todos API has suddenly become very popular. All of a sudden everyone has something to do. Our data set has grown substantially. To make sure the requests are still fast and optimized, we’re going to add pagination; we’ll give clients the power to say what portion of data they require.
To achieve this, we’ll make use of the will_paginate gem.
Let’s add it to the Gemfile:
# [...]
gem 'will_paginate', '~> 3.1.0'
# [...]
Install it:
- bundle install
Let’s modify the todos controller index action to paginate its response.
module V1
class TodosController < ApplicationController
# [...]
# GET /todos
def index
# get paginated current user todos
@todos = current_user.todos.paginate(page: params[:page], per_page: 20)
json_response(@todos)
end
# [...]
end
The index action checks for the page number in the request params. If provided, it’ll return the corresponding page of data, with twenty records per page. As always, let’s fire up the Rails server and run some tests.
- # request without page
- http :3000/todos Accept:'application/vnd.todos.v1+json' Authorization:'eyJ0...nLw2bYQbK0g'
- # request for page 1
- http :3000/todos page==1 Accept:'application/vnd.todos.v1+json' Authorization:'eyJ0...nLw2bYQbK0g'
- # request for page 2
- http :3000/todos page==2 Accept:'application/vnd.todos.v1+json' Authorization:'eyJ0...nLw2bYQbK0g'
The page number is part of the query string. Note that when we request the second page, we get an empty array. This is because we don’t have more than 20 records in the database.
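Conceptually, paginate just slices the record set; here is a hedged sketch of that slicing in plain JavaScript (the paginate name and defaults are ours):

```javascript
// Hypothetical sketch: page numbering starts at 1, twenty records per page.
function paginate(records, page = 1, perPage = 20) {
  const start = (page - 1) * perPage;
  return records.slice(start, start + perPage);
}

const todos = Array.from({ length: 25 }, (_, i) => i + 1); // 25 fake records

console.log(paginate(todos, 1).length); // 20
console.log(paginate(todos, 2));        // [ 21, 22, 23, 24, 25 ]
console.log(paginate(todos, 3));        // [] – past the data, like page 2 above
```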
Let’s seed some test data into the database.
Add the faker gem to the Gemfile and install it. Faker generates random data.
# [...]
gem 'faker'
# [...]
In db/seeds.rb
let’s define seed data.
# seed 50 records
50.times do
todo = Todo.create(title: Faker::Lorem.word, created_by: User.first.id)
todo.items.create(name: Faker::Lorem.word, done: false)
end
Seed the database by running:
- rake db:seed
Awesome, fire up the server and rerun the HTTP requests. Since we have test data, we’re able to see data from different pages.
Congratulations for making it this far! We’ve come a long way! We’ve gone through generating an API-only Rails application, setting up a test framework, using TDD to implement the todo API, adding token-based authentication with JWT, versioning our API, serializing with active model serializers, and adding pagination features.
Having gone through this series, I believe you should be able to build a RESTful API with Rails 5. Feel free to leave any feedback you may have in the comments section below. If you found the tutorial helpful, don’t hesitate to hit that share button. Cheers!
Warning: Moment.js is no longer actively maintained, and ngx-moment
has replaced angular-moment
.
Displaying time relatively has become popular in the past few years. This can be seen across social networks like Twitter and Facebook.
For example, instead of displaying the time of a post like 8:12am, the time will be displayed as 3 hrs.
This helps our users see time relatively and makes it easier to think about how long ago an update was. We’ll be looking at how we can achieve this effect in Angular.
While Angular already comes with some great filters to help us deal with displaying times and dates, it doesn’t come with a way to display time relatively out of the box.
The package that will help us display time relatively is angular-moment. This package uses the awesome Moment.js library. If you haven’t used Moment before, definitely give it a look through; it can help with all sorts of scenarios where you have to work with date and time in JavaScript.
Related Reading: All About the Built-In AngularJS Filters: Date and Time
There are a few ways to install this package. For this tutorial, we’ll just be grabbing it from CDNJS.
All we have to do is add the following lines to our project:
<!-- load momentJS (required for angular-moment) -->
<script src="//cdnjs.cloudflare.com/ajax/libs/moment.js/2.9.0/moment.min.js"></script>
<!-- load angular-moment -->
<script src="//cdnjs.cloudflare.com/ajax/libs/angular-moment/0.9.0/angular-moment.min.js"></script>
MomentJS is a requirement to use this package so that must be included in your project.
Here’s a very quick Angular application to demonstrate the different ways this package can be used. All we need to show off relative times is an Angular module, Angular controller, and a variable for the time.
We’ll be working inside a CodePen for this. If you’d like to follow along, go ahead and create your own CodePen. When working in CodePen, make sure that you load your assets through the JS settings.
Make sure that Angular is selected and that you link to the two resources needed: moment
and angular-moment
.
Here is the code for the Angular side of things. Place this in the JS tab.
// create an angular app
angular.module('timeApp', ['angularMoment'])
// create an angular controller
.controller('mainController', function() {
// bind the controller to vm (view-model)
var vm = this;
// create a new time variable with the current date
vm.time = new Date();
});
We have created a new time here using new Date()
. You can also pass in your date/time into Date()
to convert it to a date object. With our data and Angular application ready, let’s move onto the HTML tab to see how we can display this vm.time
variable to our users.
We have to apply our Angular module (timeApp
) and our Angular controller (mainController
) to our application, so let’s start our view with:
<!-- apply our app and controller -->
<div class="container" ng-app="timeApp" ng-controller="mainController as main">
<div class="jumbotron">
<p>The time is {{ main.time }}</p>
</div>
<!-- show our relative times here -->
</div>
At the very top of our document, we are going to show the time to see what we are starting with. At the time of this writing, I see:
The time is “2015-02-04T05:49:33.190Z”
The main ways that we can use this package are as a directive and as a filter. Let’s demonstrate both ways.
Here is the bare minimum we need to use angular-moment
as a directive.
<time am-time-ago="main.time"></time>
The am-time-ago
will automatically update the time. When you first see it, you will see a few seconds ago. If you wait a little longer, you’ll see 3 minutes ago.
In addition to using the directive, we also have the ability to use a filter when displaying time.
When using a filter, you can declare the exact format that you’d like to see:
<time>{{ main.time | amDateFormat: 'dddd, MMMM Do YYYY, h:mm a' }}</time>
The above would display: Tuesday, February 3rd 2015, 9:49 pm
You can pass in any format you like to get the date exactly how you’d like it. That looks a lot better than the 2015-02-04T05:49:33.190Z that we started out with.
The third way that you can use the angular-moment package is the calendar format. Moment comes with calendar time which is a little different than the time ago example from earlier.
The calendar format will show different strings based on how close the time is to a certain time (usually now). What this means is that if a date/time was yesterday, you will see Yesterday 9:49pm. If you had a time from a week ago, the calendar time would display Last Monday 9:49pm. If the time has gone beyond a week, then you will just see the normal date (7/10/2011).
<time>{{ main.time | amCalendar }}</time>
The above code would display: Today at 9:49 PM.
When using relative time, it’s important to still provide the exact time information to your users. If you look at Twitter, it can be confusing since there are so many updates happening in real-time. You could see people having a conversation and each update could say 1 min ago. You would have no idea who said what when!
A good practice when using relative time is to define a title
on your <time>
tags so that a user has the ability to hover and see the exact time.
For example, we can use the angular-moment filter method to add a title:
<time title="{{ main.time | amDateFormat: 'dddd, MMMM Do YYYY, h:mm a' }}">{{ main.time | amCalendar }}</time>
Now when you hover over this time, you will be able to see the time.
Hopefully, this simple but powerful package will help you when displaying times to your users. This is a friendly way to provide more context to your users as they browse your application or your site.
HTTP middleware provides a convenient mechanism for filtering HTTP requests entering your application. Laravel, for example, has a middleware for verifying a user’s authentication.
Those are some cases where I have had to resort to using middleware; there are many more where you would want to use one.
By the end of this article, you should be able to create a middleware, register it, and use it in your projects. We will illustrate everything from creating a middleware to using it by building one of our own. Our middleware will enable maintenance mode either site-wide or on specific routes.
Thanks to artisan
, creating middlewares in Laravel is easy. All we need to do is open a terminal in the project root and run the following command.
- php artisan make:middleware <MiddlewareName>
Replace <MiddlewareName>
with the actual name of the middleware.
This command creates our middleware class in app/Http/Middleware
. To create our own middleware (which we will call DownForMaintenance
), we can run:
- php artisan make:middleware DownForMaintenance
So we can now open our middleware class and add our logic. Remember, our middleware handles site maintenance mode. We first need to import the HttpException
class. At the top of the file, add:
use Symfony\Component\HttpKernel\Exception\HttpException;
In the handle
method of our middleware, we can just do this.
public function handle($request, Closure $next)
{
throw new HttpException(503);
}
By throwing this exception, Laravel knows to load the 503.blade.php
file. This should contain the message for maintenance mode.
Now that we’ve created a middleware, we need to let the application know it exists. If you want a middleware to run on every request, go to app/Http/Kernel.php
and add the middleware FQN to the Kernel
class’s $middleware
property.
protected $middleware = [
...
\App\Http\Middleware\DownForMaintenance::class
];
By doing this, the user will see a message for every page they visit.
If you want the middleware to trigger only on some routes, we can name the middleware and use that name to attach it to those routes. To name the middleware, while still in app/Http/Kernel.php
, add the keyed property to the $routeMiddleware
array. The array key is the name of the middleware, while the value should be the FQN of the middleware.
protected $routeMiddleware = [
...
'down.for.maintenance' => \App\Http\Middleware\DownForMaintenance::class,
...
];
Take this route for example,
Route::get('posts/{something}', 'PostController@getPost');
the getPost
method on the PostController
class fires when the url matches posts/{something}
.
We could add our down.for.maintenance
middleware by changing the second parameter of Route::get
to an array that contains a middleware property and closure which processes the route.
Route::get('posts/{something}', ['middleware' => 'down.for.maintenance', function () {
    // this closure never runs while the middleware throws the 503
}]);
By doing this, only routes matching posts/{something}
will show the down for maintenance error.
Another way to add middleware to routes is to call the middleware
method on the route definition, like this:
Route::get('posts/{something}', function () {
//
})->middleware(['first', 'second']);
Passing parameters to middlewares is quite easy. Say, for example, our middleware validates the role of a user before allowing access to a page. We could also pass a list of allowed roles to the middleware.
To pass a parameter to our middleware, after the $request
and Closure
parameters, we then add our variables.
public function handle($request, Closure $next, $role)
{
if (! $request->user()->hasRole($role)) {
// Redirect...
}
return $next($request);
}
To pass the variable, when attaching our middleware to routes, we do this.
Route::put('post/{id}', ['middleware' => 'role:editor', function ($id) {
//
}]);
At times, there might be a bunch of middleware that you apply to several routes. It would be better if we could group them, which allows us to reuse that group across routes.
To group middlewares, we add a $middlewareGroups
property (if it does not exist) to the Kernel
class. This property takes a key-value pair array. The key represents the group name, and the value is an array of middleware FQNs. By default, Laravel provides a web
, and api
group.
protected $middlewareGroups = [
'web' => [
\App\Http\Middleware\EncryptCookies::class,
\Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse::class,
\Illuminate\Session\Middleware\StartSession::class,
\Illuminate\View\Middleware\ShareErrorsFromSession::class,
\App\Http\Middleware\VerifyCsrfToken::class,
],
'api' => [
'throttle:60,1',
'auth:api',
],
];
If we need to use this group in our application, we can then do this.
Route::group(['middleware' => ['web']], function () {
//
});
If you noticed, throughout this article, our middleware performs its action before handing the request on to the rest of the application. But we could also let the application handle the request first and act just before returning the response.
public function handle($request, Closure $next)
{
$response = $next($request);
/**
* Perform actions here
*/
return $response;
}
By doing this, we let the rest of the application handle the request first, then perform our action before returning the response.
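To make the before/after distinction concrete, here is a hedged, framework-agnostic sketch in JavaScript (Laravel's real middleware are PHP classes, as shown above): a middleware receives the request and a `next` continuation, and can act either before or after calling it.

```javascript
// Sketch only: the before/after middleware pattern, independent of Laravel.
// `next` stands in for "pass the request deeper into the application".
function beforeMiddleware(request, next) {
  request.checked = true;          // act on the request first...
  return next(request);            // ...then hand it on
}

function afterMiddleware(request, next) {
  const response = next(request);  // let the application respond first...
  response.logged = true;          // ...then act on the response
  return response;
}
```

Laravel's `$next($request)` call plays exactly the role of `next` here: everything before it runs on the way in, everything after it runs on the way out.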
The use cases mentioned above are just a few examples. You can create a middleware to do so much more than what I have listed above.
The web is growing. It is growing so fast that if you don’t catch up you might not be able to shine bright in your career as a web developer.
A few months ago I started hearing about Web Components from the mouths of so-called professional developers. I knew it was one of those new things that “I don’t really need now, but I must learn it for the future”. Unfortunately, for poor me, Angular, which has always been my favorite JavaScript framework, decided to componentize everything.
This concept is one of the standards of the web that can boast of being easy to grasp.
Basically, web components give room for you to simply bundle these tags with their styles and scripts as a reusable component that is exposed via one tag (instead of littering your HTML with similar confusing tags).
A component is basically just a grouping of HTML/JS/CSS all in one.
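As an illustration only (real Web Components use the Custom Elements and Shadow DOM browser APIs, which this sketch does not touch), that grouping can be modeled as one object holding markup, styles, and behavior, rendered through a single entry point:

```javascript
// Toy model: a "component" is one bundle of template + styles + behavior.
const greeting = {
  styles: 'h1 { color: teal; }',
  template: (state) => '<h1>' + state.msg + '</h1>'
};

// One entry point renders the whole bundle to a string.
function renderComponent(component, state) {
  return '<style>' + component.styles + '</style>' + component.template(state);
}
```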
Web Components are a new standard that is here to stay because it is widely accepted. The problem is that we web devs know how this new stuff rolls.
Web Components are widely accepted, but not yet widely supported in browsers, and this is where Angular 2 comes in.
Angular 2 is the “new guy”. It implements components at its core and makes it simple to work with components in our daily web projects.
I understand that the reason why a lot of us have yet to embrace this “new guy” is because learning Angular 1 was not fun and now we are hearing of another version with a different concept. The good news is Angular 2 is actually simple and you just need 4-5 days to start doing wonders with it.
Now enough talking. Let’s have a little fun making a CSS-based (and I really mean CSS-based) carousel component with Angular 2.
npm
has become so popular that, recently, tutorials forget to remind us to install it. Therefore, kindly install Node.js so as to get its package manager, npm.
Create a folder in your favorite directory. This will be located probably in your projects folder or desktop. Name it angular2-carousel-component
. Navigate via your CLI to this created folder.
Create package.json
and tsconfig.json
at the root of the folder with the following contents:
package.json:
{
"name": "angular2-quickstart",
"version": "1.0.0",
"scripts": {
"tsc": "tsc",
"tsc:w": "tsc -w",
"lite": "lite-server",
"start": "concurrent \"npm run tsc:w\" \"npm run lite\" "
},
"license": "ISC",
"dependencies": {
"angular2": "2.0.0-beta.0",
"systemjs": "0.19.6",
"es6-promise": "^3.0.2",
"es6-shim": "^0.33.3",
"reflect-metadata": "0.1.2",
"rxjs": "5.0.0-beta.0",
"zone.js": "0.5.10"
},
"devDependencies": {
"concurrently": "^1.0.0",
"lite-server": "^1.3.1",
"typescript": "^1.7.3"
}
}
tsconfig.json:
{
"compilerOptions": {
"target": "ES5",
"module": "system",
"moduleResolution": "node",
"sourceMap": true,
"emitDecoratorMetadata": true,
"experimentalDecorators": true,
"removeComments": false,
"noImplicitAny": false
},
"exclude": [
"node_modules"
]
}
The tsconfig.json
file is a configuration file for TypeScript (more on that later), whose properties are explained here in the “Appendix: TypeScript configuration” section. We can go ahead and install the dependencies by running:
- npm install
Add two more folders, app
and images
. Our main application codes will be in the app
folder, but you can copy the images for the slides here into the images
folder.
We are all set to make our component, but before we do that, let us explain the CSS-based carousel and how it is implemented.
Carousels are one of the most popular concepts on the web, but they come at a price: there are heavy images to display and scripts to manipulate the image slides.
How about we just endure only the pain caused by these images and leave out the troubles of JavaScript? How about we make a Carousel with just CSS? How possible is that?
Making carousels has never been easy; that changed when I read Harry Roberts’ article on his blog. Before you get too excited about this simplicity, be aware that your carousel will be dead simple and basic, and won’t have many controls. Rest assured, though, that it will be a RESPONSIVE “carousel”.
Assuming we have a template as shown below:
<div class="carousel">
<ul class="slides">
<li>
<h2>We are covered</h2>
<img src="images/covered.jpg" alt="">
</li>
<li>
<h2>Generation Gap</h2>
<img src="images/generation.jpg" alt="">
</li>
<li>
<h2>Potter Me</h2>
<img src="images/potter.jpg" alt="">
</li>
<li>
<h2>Pre-School Kids</h2>
<img src="images/preschool.jpg" alt="">
</li>
<li>
<h2>Young Peter Cech</h2>
<img src="images/soccer.jpg" alt="">
</li>
</ul>
</div>
Harry’s idea is to make the .slides
wrapper have a width that is equal to the number of slides multiplied by the viewport. So, assuming our viewport is 100% and we have 5 items, the width of the slides should be 500% so as to contain all the slides when aligned horizontally.
We may leave the CSS implementation for now and see it while making the carousel component.
Angular 2 uses TypeScript which, from a general perspective, is a superset of JavaScript that compiles to plain JavaScript. It implements everything in JS, but adds the kind of strictness that C-based languages have and JS does not. Don’t be overwhelmed; it is just a “cool JavaScript”.
Microsoft has a TypeScript plugin for Sublime Text which will help you with hinting, syntax highlighting, and many other goodies.
There are also TypeScript plugins for various other editors, which makes using TypeScript easy no matter what environment you work in.
As the idea is to make strict applications, let us first define an interface for the images collection. The interface is just like in any other language, it is a signature that we must adhere to if we care to use it. In the app
folder, add a file named image.interface.ts
:
export interface Image {
title: string;
url: string;
}
You do not have to understand interfaces in depth to follow this tutorial.
We just exported an Image interface that will serve as a signature for the collection of our images. It simply says that any image collection implementing this interface must have a title
and url
property, both of type string.
Now that we have a signature, we can go ahead and create the carousel component. Create carousel.component.ts
in the app
directory:
// Import Component from the angular core package
import {Component} from 'angular2/core';
// Import the Image interface
import {Image} from './image.interface';
// Component decorator
@Component({
//Name of our tag
selector: 'css-carousel',
//Template for the tag
template: `
<div class="carousel">
<ul class="slides">
<li *ngFor="#image of images">
<h2>{{image.title}}</h2>
<img src="{{image.url}}" alt="">
</li>
</ul>
</div>
`,
//Styles for the tag
styles: [`
.carousel{
overflow:hidden;
width:100%;
}
.slides{
list-style:none;
position:relative;
width:500%; /* Number of panes * 100% */
overflow:hidden; /* Clear floats */
/* Slide effect Animations*/
-moz-animation:carousel 30s infinite;
-webkit-animation:carousel 30s infinite;
animation:carousel 30s infinite;
}
.slides > li{
position:relative;
float:left;
width: 20%; /* 100 / number of panes */
}
.carousel img{
display:block;
width:100%;
max-width:100%;
}
.carousel h2{
margin-bottom: 0;
font-size:1em;
padding:1.5em 0.5em 1.5em 0.5em;
position:absolute;
right:0px;
bottom:0px;
left:0px;
text-align:center;
color:#fff;
background-color:rgba(0,0,0,0.75);
text-transform: uppercase;
}
@keyframes carousel{
0% { left:-5%; }
11% { left:-5%; }
12.5% { left:-105%; }
23.5% { left:-105%; }
25% { left:-205%; }
36% { left:-205%; }
37.5% { left:-305%; }
48.5% { left:-305%; }
50% { left:-405%; }
61% { left:-405%; }
62.5% { left:-305%; }
73.5% { left:-305%; }
75% { left:-205%; }
86% { left:-205%; }
87.5% { left:-105%; }
98.5% { left:-105%; }
100% { left:-5%; }
}
`],
})
//Carousel Component itself
export class CSSCarouselComponent {
//images data to be bound to the template
public images = IMAGES;
}
//IMAGES array implementing Image interface
var IMAGES: Image[] = [
{ "title": "We are covered", "url": "images/covered.jpg" },
{ "title": "Generation Gap", "url": "images/generation.jpg" },
{ "title": "Potter Me", "url": "images/potter.jpg" },
{ "title": "Pre-School Kids", "url": "images/preschool.jpg" },
{ "title": "Young Peter Cech", "url": "images/soccer.jpg" }
];
When we ran the npm install
command, we pulled the Angular 2 package into our folder. The first line is importing the Angular core library. We also imported the Image interface that we created earlier as we will make use of it here. Notice that we do not have to add the .ts
extension when importing.
The file is exporting a CSSCarouselComponent
class which has a public property of an array of images implementing image interface. The class also has a @Component
decorator which is specifying the meta-properties of this class. The selector is the name we want the tag to have and the template is the HTML for the component and styles, which is the CSS trick we played to get our carousel working.
Note: Angular 2 supports 3 types of styles: template-inline, component-inline, and external CSS. Just like in plain HTML, you can add styles directly to template tags, which is referred to as template-inline. Unlike in the old days, this is now an acceptable practice because the styles are encapsulated within the component.
Component-inline is what we just implemented in our demo above, while external styles can be used by replacing the component-inline styles property with: styleUrls: ['style.css']
Next up is to create our app
component which just serves as a parent. The app
component is like the building in the illustration I made while introducing this article. It has the same skeleton as the carousel component. Create app.component.ts
in app
folder with the following contents:
//import Component from angular core
import {Component} from 'angular2/core';
//import our Carousel Component
import {CSSCarouselComponent} from './carousel.component';
//@Component decorator
@Component({
//tag
selector: 'my-app',
//template
template: `
<div class="wrapper">
<css-carousel></css-carousel>
</div>
`,
//css
styles: [`
.wrapper{
width: 60%;
margin: 60px auto;
}
`],
//tell angular we are using the css-carousel tag in this component
directives: [CSSCarouselComponent]
})
//actual class
export class AppComponent { }
The major difference here is the directives
property in the @Component
decorator which is an array of all the imported components that we will use on this component. Notice we already imported the CSSCarouselComponent
after importing angular’s core library.
We can now boot up the app. All there is to do when booting is to import angular, import the app to boot, and boot with the bootstrap()
method. Create a file in app
with the name boot.ts
:
//Import Angular core
import {bootstrap} from 'angular2/platform/browser'
//Import App Component
import {AppComponent} from './app.component'
//Boot
bootstrap(AppComponent);
As usual, an index.html entry point is needed for our cool app. Create one at the root and update it with:
<!-- head -->
<title>CSS Carousel Angular 2 Component</title>
<!-- 1. Load libraries -->
<!-- #docregion libraries -->
<!-- #docregion ie-polyfills -->
<!-- IE required polyfills, in this exact order -->
<script src="node_modules/es6-shim/es6-shim.min.js"></script>
<script src="node_modules/systemjs/dist/system-polyfills.js"></script>
<!-- #enddocregion ie-polyfills -->
<script src="node_modules/angular2/bundles/angular2-polyfills.js"></script>
<script src="node_modules/systemjs/dist/system.src.js"></script>
<script src="node_modules/rxjs/bundles/Rx.js"></script>
<script src="node_modules/angular2/bundles/angular2.dev.js"></script>
<!-- #enddocregion libraries -->
<!-- 2. Configure SystemJS -->
<!-- #docregion systemjs -->
<script>
System.config({
packages: {
app: {
format: 'register',
defaultExtension: 'js'
}
}
});
System.import('app/boot')
.then(null, console.error.bind(console));
</script>
<!-- #enddocregion systemjs -->
<!-- body -->
<my-app>Loading...</my-app>
Some JavaScript libraries are included, but the one to pay attention to is SystemJS, a third-party library that adds ES6 module loading functionality across browsers. As seen in the file, it loads our bootstrap file app/boot
in the index.html file.
You can run the app via your CLI with:
- npm start
Web components are going to trend whether we are ready for them or not. Even with trends aside, web components are a genuinely appreciated and accepted concept. Tools like Polymer and React have already made a good start, but if you love Angular like I do, then Angular 2 is an awesome option.
One last thing, forget the competition among these tools, just stick to what you can afford because they all meet the basic requirements you need to make a web component application.
The content of an HTML document can be very long and difficult to access only through the scroll. Because of this arduous task, developers often use internal links (page jumps) as an alternative mode of transport around the page. This useful technique has been improved with the help of JavaScript to offer a better experience, primarily by offering soft jumps and then introducing the so-called Scrollspy scripts.
A Scrollspy is used to automatically update links in a navigation list based on scroll position.
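Stripped of DOM details, the core calculation a scrollspy performs can be sketched like this (a simplification; Gumshoe, which we use below, also handles header offsets, resizing, and more):

```javascript
// Given the current scroll position and each section's top offset (sorted
// ascending), return the index of the section currently in view: the last
// one whose top is at or above the scroll position.
function activeSection(scrollY, sectionTops) {
  let active = 0;
  for (let i = 0; i < sectionTops.length; i++) {
    if (scrollY >= sectionTops[i]) active = i;
  }
  return active;
}
```

A scrollspy runs this kind of check on every scroll event and highlights the matching navigation link.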
Through this tutorial, we’ll be building a custom Scrollspy component. See exactly what we are going to build below:
Also, you can take a look at the working DEMO.
To accomplish this custom Scrollspy we will be using:
Along with the tutorial, we’ll be explaining some features we use of these libraries, but it’s a good idea to check the Github repositories, for basic understanding.
Let’s start with the HTML structure we’ll be using, describing the key elements in the comments:
<section>
<!-- Fixed header -->
<!-- The [data-gumshoe-header] attribute tells Gumshoe to automatically offset its calculations based on the header's height -->
<!-- The [data-scroll-header] attribute does the same thing for Smooth Scroll's calculations -->
<header class="page-header" data-gumshoe-header data-scroll-header>
<div class="page-nav">
<!-- Nav and links -->
<!-- The [data-gumshoe] attribute indicates the navigation list that Gumshoe should watch -->
<nav data-gumshoe>
<!-- Turn anchor links into Smooth Scroll links by adding the [data-scroll] data attribute -->
<a data-scroll href="#eenie">Eenie</a>
<a data-scroll href="#meanie">Meanie</a>
<a data-scroll href="#minnie">Minnie</a>
<a data-scroll href="#moe">Moe</a>
</nav>
<!-- Arrows -->
<a class="nav-arrow nav-arrow-left"><svg class="icon"><use xlink:href="#arrow-up"/></svg></a>
<a class="nav-arrow nav-arrow-right"><svg class="icon"><use xlink:href="#arrow-down"/></svg></a>
</div>
</header>
<!-- Page content -->
<main class="page-content">
<section>
<h2 id="eenie"><a data-scroll href="#eenie">Eenie</a></h2>
<p>Lorem ipsum dolor sit amet, has dico eligendi ut.</p>
<!-- MORE CONTENT HERE -->
</section>
</main>
</section>
With the HTML ready, we are all set to add some style. Let’s see the key style pieces commented briefly:
h2 {
/* This is to solve the headbutting/padding issue. Read more: https://css-tricks.com/hash-tag-links-padding/ */
/* 110px = 80px (fixed header) + 30px (additional margin) */
&:before {
display: block;
content: " ";
margin-top: -110px;
height: 110px;
visibility: hidden;
}
}
/* Fixed header */
.page-header {
position: fixed;
top: 0;
left: 0;
width: 100%;
height: 80px; /* The height of fixed header */
background-color: #2D353F;
text-align: center;
z-index: 2;
}
/* Content container */
.page-content {
display: inline-block; /* This is for clearing purpose. */
margin: 80px 50px 30px; /* Margin top = 80px because of fixed header */
}
/* Nav container */
.page-nav {
display: inline-block;
position: relative;
margin-top: 20px;
height: 40px; /* This is the same height of each link */
width: 400px;
max-width: 100%; /* Responsive behavior */
overflow: hidden; /* Only current link visible */
background-color: #427BAB;
}
/* Nav and links */
nav {
position: relative;
width: 100%;
line-height: 40px;
text-align: center;
background-color: rgba(0, 0, 0, 0.05);
a {
display: block;
font-size: 18px;
color: #fff;
outline: none;
}
}
As we will be working closely with the DOM, we need to get all the elements we need first. Also, we will declare the additional variables we will be using.
// Init variables
var navOpen = false;
var pageNav = document.querySelector('.page-nav');
var navEl = document.querySelector('.page-nav nav');
var navLinks = document.querySelectorAll('.page-nav nav a');
var arrowLeft = document.querySelector('.nav-arrow-left');
var arrowRight = document.querySelector('.nav-arrow-right');
var navHeight = 40;
var activeIndex, activeDistance, activeItem, navAnimation, navItemsAnimation;
The following is a key part of the puzzle. This function translates the nav
element to show only the selected link, using the activeIndex
value.
// This translates the nav element to show the selected item
function translateNav(item) {
// If animation is defined, pause it
if (navItemsAnimation) navItemsAnimation.pause();
// Animate the `translateY` of `nav` to show only the current link
navItemsAnimation = anime({
targets: navEl,
translateY: (item ? -activeIndex * navHeight : 0) + 'px',
easing: 'easeOutCubic',
duration: 500
});
// Update link on arrows, and disable/enable accordingly if first or last link
updateArrows();
}
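The offset math inside translateNav is worth spelling out (the helper name is ours, for illustration only):

```javascript
// Each link occupies navHeight pixels, so showing link i means sliding the
// nav up by i * navHeight pixels.
function navOffset(activeIndex, navHeight) {
  return -activeIndex * navHeight + 'px';
}
```

With our navHeight of 40, showing the third link (index 2) translates the nav by '-80px', which is exactly the `translateY` value anime.js animates to above.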
Then, we need a way to open and close the nav
. The open state should let us see all the links and select one of them directly. The closed state is the default, showing only the selected link.
// Open the nav, showing all the links
function openNav() {
// Updating states
navOpen = !navOpen;
pageNav.classList.add('nav-open');
// Moving the nav just like first link is active
translateNav();
// Animate the `height` of the nav, letting see all the links
navAnimation = anime({
targets: pageNav,
height: navLinks.length * navHeight + 'px',
easing: 'easeOutCubic',
duration: 500
});
}
// Close the nav, showing only the selected link
function closeNav() {
// Updating states
navOpen = !navOpen;
pageNav.classList.remove('nav-open');
// Moving the nav showing only the active link
translateNav(activeItem);
// Animate the `height` of the nav, letting see just the active link
navAnimation = anime({
targets: pageNav,
height: navHeight + 'px',
easing: 'easeOutCubic',
duration: 500
});
}
Now let’s see how we handle the events. We need handlers to open or close the nav
accordingly.
// Init click events for each nav link
for (var i = 0; i < navLinks.length; i++) {
navLinks[i].addEventListener('click', function (e) {
if (navOpen) {
// Just close the `nav`
closeNav();
} else {
// Prevent scrolling to the active link and instead open the `nav`
e.preventDefault();
e.stopPropagation();
openNav();
}
});
}
// Detect click outside, and close the `nav`
// From: http://stackoverflow.com/a/28432139/4908989
document.addEventListener('click', function (e) {
if (navOpen) {
var isClickInside = pageNav.contains(e.target);
if (!isClickInside) {
closeNav();
}
}
});
We are ready to let Gumshoe and Smooth Scroll do the magic. See how we are initializing them:
// Init Smooth Scroll
smoothScroll.init({
// This `offset` is the `height` of fixed header
offset: -80
});
// Init Gumshoe
gumshoe.init({
// The callback is triggered after setting the active link, to show it as active in the `nav`
callback: function (nav) {
// Check if active link has changed
if (activeDistance !== nav.distance) {
// Update states
activeDistance = nav.distance;
activeItem = nav.nav;
activeIndex = getIndex(activeItem);
// Translate `nav` to show the active link, or close it
if (navOpen) {
closeNav();
} else {
translateNav(activeItem);
}
}
}
});
And we are done! You can see it working here.
For the sake of clarity, we have commented only the most important parts of the code. But you can get it all from this GitHub repo.
We really hope you have enjoyed it and found it useful!
If you have built a React app at any point in your programming career, you probably have experienced an error at some point that was cryptic and did not provide any meaningful context on what actually happened.
This probably means the error occurred at some point in the application and our React component did not handle the error gracefully, well mostly because it was none of its business to make sure the entire app holds.
A JavaScript error in a part of the UI shouldn’t break the whole app. To solve this problem for React users, React 16 introduces a new concept of an “error boundary”.
An Error Boundary is a React component that catches errors within its children and does something meaningful with them such as post them to an error logging service or display a fallback UI for the specific child while maintaining the rest of the React app’s sanity.
Therefore, for a block of functionality to be covered by Error Boundaries, it has to be a child of one in the first place.
Before we get started on an example of how you can use React 16, beware that Error Boundaries will not catch errors in:
1. Event handlers. Use a regular try / catch block within event handlers instead.
2. Asynchronous code.
Lifecycle methods are special functions that are invoked at different stages in the life of a component. These stages can be categorized into Mounting, Updating, Unmounting and Error handling.
For a component to be considered an Error Boundary, it has to make use of the componentDidCatch()
lifecycle method to handle errors. It works in the same way that JavaScript’s try/catch
works.
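The plain-JavaScript analogue is worth keeping in mind: componentDidCatch is to a component subtree what catch is to a block of code. A hedged sketch of that idea:

```javascript
// Run a function and capture, rather than propagate, anything it throws -
// the same "contain the failure, report it meaningfully" pattern an Error
// Boundary applies to its child components.
function guarded(fn) {
  try {
    return { value: fn(), error: null };
  } catch (err) {
    return { value: null, error: err.message };
  }
}
```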
The componentDidCatch
method is invoked with the error
and info
parameters that contain more context on the thrown error.
Just like the granularity of React component is entirely up to you, Error Boundaries can be as specific as you want them to be. You can have a top-level Error Boundary that prevents the app from crashing due to unexpected occurrences and displaying a more suitable message.
For this article, we will be creating an Error Boundary that only wraps a specific functionality with errors of our own design.
To get started, we will create a simple component that only allows you to enter up to five characters into an input field. Any more than that and we break the internet. Feel free to try it out in one of the freely available online editors; I personally use CodePen.io.
class FiveMax extends React.Component {
constructor(props) {
super(props);
this.state = { value: ''}
this.handleChange = this.handleChange.bind(this);
}
handleChange(e) {
this.setState({ value: e.target.value})
}
render() {
if(this.state.value.length > 5) {
throw new Error('You cannot enter more than five characters!');
}
return (
<div>
<label>Type away: </label>
<input type="text" value={this.state.value} onChange={this.handleChange} />
</div>
);
}
}
ReactDOM.render(<FiveMax />, document.getElementById('root'));
If you type in more than five characters, you should get a big shiny error on your console.
You will also notice that your input box disappears due to the error. Perfect! Let’s create an Error Boundary. I’ll call mine Shield.
class Shield extends React.Component {
constructor(props) {
super(props);
// Add some default error states
this.state = {
error: false,
info: null,
};
}
componentDidCatch(error, info) {
// Something happened to one of my children.
// Add error to state
this.setState({
error: error,
info: info,
});
}
render() {
if(this.state.error) {
// Some error was thrown. Let's display something helpful to the user
return (
<div>
<h5>Sorry. More than five characters!</h5>
<details style={{ whiteSpace: 'pre-wrap' }}>
{this.state.info.componentStack}
</details>
</div>
);
}
// No errors were thrown. As you were.
return this.props.children;
}
}
Nothing special, right? The only new thing that we have in this component that you probably haven’t used before is the componentDidCatch
method.
When an error is thrown in any of its children, the error state will be updated and a meaningful error displayed. Otherwise, the Shield
will go ahead and display the children as usual.
To start using the Error Boundary, we will look at two separate scenarios.
In a situation where you have two Components within the same Error Boundary and an error is thrown in one of the Components, they are both affected due to the structure of the render
method in our Shield
component.
To try this out, we will add two FiveMax
components inside our Shield
Error Boundary.
// Shield Component
// FiveMax Component
function App() {
return (
<div>
<h3>Two children under one error boundary. If one crashes. Both are affected!</h3>
<Shield>
<FiveMax />
<FiveMax />
</Shield>
</div>
);
}
ReactDOM.render(<App />, document.getElementById('root'));
When you try typing more than five characters into any of the fields, an error is logged on the console and we get a pleasing and more informative message displayed to the user in place of both components.
This is all good but we did not need to lose the other component that did not throw any errors. Let’s fix that!
Now to prevent what happened in scenario one, we will have each of the FiveMax
components in their own Shield
Error Boundary.
// Shield Component
// FiveMax Component
function App() {
return (
<div>
<h3>Two children, each with their own Error Boundary. One crashes, the other is not affected</h3>
<Shield><FiveMax /></Shield>
<Shield><FiveMax /></Shield>
</div>
);
}
ReactDOM.render(<App />, document.getElementById('root'));
Now try typing more than five characters in any of the components. Notice anything? Instead of losing both components to the error, you only lose the component that is affected, and the fallback UI appears in its place only. The rest of the app remains intact!
You can try out both scenarios in the Pen below.
See the Pen Five Max by John Kariuki (@johnkariuki) on CodePen.
How you use error logs entirely depends on what you want to achieve. You can have a separate Error Boundary for your navigation bar and a different one for the rest of your app so that if something goes wrong in your App’s functionality, the navigation remains usable!
Remember that if your Error Boundary throws an error, anything in its child component tree is affected. So be careful when coming up with one so that it is not prone to errors.
Vue.js is simple. It is so simple that people often dismiss it as only suitable for small projects. While it is true the Vue.js core is just a view layer library, there are in fact a collection of tools that will enable you to build full-blown, large-scale SPAs (Single Page Applications) using Vue.js with a pleasant development experience.
If you are already familiar with the basics of Vue.js but feel that the world of SPA is scary, this series is for you. We will first introduce the concepts, tools, and libraries needed in this first article, and then will walk you through the full process of building an example app in the rest of the series.
Single-Page Applications (SPAs) are web apps that load a single HTML page and dynamically update that page as the user interacts with the app. SPAs use AJAX and HTML5 to create fluid and responsive Web apps, without constant page reloads.
As stated in the above description taken from Wikipedia, the main advantage of SPAs is that the app can respond to user interactions without fully reloading the page, resulting in a much more fluid user experience.
As a nice side effect, a SPA also encourages the backend to focus on exposing data endpoints, which makes the overall architecture more decoupled and potentially reusable for other types of clients.
From the developer’s perspective, the main difference between SPAs and a traditional backend-rendered app is that we have to treat the client-side as an application with its own structure. Typically we will need to handle routing, data fetching and persistence, view rendering, and the necessary build setup to facilitate a modularized codebase.
For a Vue.js-based SPA, here are the tools and libraries that we will use to fill in these gaps:
Let’s take a closer look at each part.
This series assumes you are already familiar with the basics of Vue.js. If you are not, you should be able to quickly pick it up by going through the official guide and other available tutorials.
The core concept when using Vue.js for large SPAs is dividing your application into many nested, self-contained components. We also want to carefully design how these components interact with one another by leveraging component props for the data flow and custom events for communication. By doing so, we dissect the complexity into small, decoupled units that are tremendously easier to maintain.
The official vue-router library handles client-side routing, and supports both hash mode and HTML5 history mode. It is a bit different from standalone routing libraries in that it deeply integrates with Vue.js and makes the assumption that we are mapping nested routes to nested Vue components.
When using vue-router
, we implement components that serve as “pages”, and within these components we can implement hook functions that are called when the route changes.
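To see what "mapping routes to components" means in the abstract, here is a hedged toy matcher. This is not the vue-router API; the route names and matching logic are illustrative only.

```javascript
// Map URL patterns to page components; ':id'-style segments become params.
const routes = {
  '/': 'HomeView',
  '/posts': 'PostListView',
  '/posts/:id': 'PostDetailView'
};

function matchRoute(routeMap, path) {
  const inSegs = path.split('/').filter(Boolean);
  for (const [pattern, component] of Object.entries(routeMap)) {
    const segs = pattern.split('/').filter(Boolean);
    if (segs.length !== inSegs.length) continue;
    const params = {};
    let matched = true;
    for (let i = 0; i < segs.length; i++) {
      if (segs[i].charAt(0) === ':') params[segs[i].slice(1)] = inSegs[i];
      else if (segs[i] !== inSegs[i]) { matched = false; break; }
    }
    if (matched) return { component, params };
  }
  return null;
}
```

vue-router does this (and much more, including nested routes) for us, rendering the matched component as the current "page".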
State management is a topic that only arises when your application’s complexity grows beyond a certain level. When you have multiple components that need to share and mutate application state, it can get very hard to reason about and maintain if you don’t have a layer in your application that is dedicated to managing such shared state.
This is where Vuex comes in. You don’t necessarily need Vuex if your application is relatively simple - but if you are interested, here’s an excellent intro on what problem it solves by Anirudh Sanjeev.
We will be working with a RESTful backend in the example, so we are using the vue-resource plugin which is maintained by the PageKit team. Do note that Vue.js SPAs are backend-agnostic and can basically work with any data fetching solution you prefer, for example fetch, restful.js, Firebase or even Falcor.
This is probably the biggest hurdle that you’ll have to jump through if you are not familiar with the frontend build tool scene, and we will try to explain it here. Feel free to skip this section if you are already experienced with Webpack.
First, the entire build toolchain relies on Node.js, and we will be managing all our library and tool dependencies using npm. Although npm started out as the package manager for Node.js backend modules, it is now widely used for front-end package management, too. Because all npm packages are authored using the CommonJS module format, we need special tooling to “bundle” these modules into files that are suitable for final deployment. Webpack is exactly such a tool, and you may have also heard of a similar tool called Browserify.
We will be using Webpack for the series because it provides more advanced functionalities out of the box, such as hot-reloading, bundle-splitting, and static asset handling.
Both Webpack and Browserify expose APIs that allow us to load more than just CommonJS modules: for example, we can directly require() an HTML file by transforming it into a JavaScript string.
By treating everything for your frontend including HTML, CSS, and even image files as module dependencies that can be arbitrarily transformed during the bundling process, Webpack actually covers most of the build tasks that you will encounter when building a SPA. We are primarily going to build the example using Webpack and plain NPM scripts, without the need for a task runner like Gulp or Grunt.
We will also be using vue-loader which enables us to author Vue components in a single file:
<template>
<h1 class="red">{{msg}}</h1>
</template>
<script>
export default {
data () {
return {
msg: 'Hello world!'
}
}
}
</script>
<style>
.red {
color: #f00;
}
</style>
In addition, the combination of Webpack and vue-loader gives us:
ES2015 by default. This allows us to use future JavaScript syntax today, which results in more expressive and concise code.
Embedded pre-processors. You can use your pre-processors of choice inside single-file Vue components, for example using Jade for the template and SASS for the styles.
CSS output inside Vue components are auto-prefixed. You can also use any PostCSS plugins you like.
Scoped CSS. By adding a scoped attribute to the <style> tag, vue-loader will simulate scoped CSS by rewriting the template and style output so that the CSS for a specific component will not affect other parts of your app.
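For reference, here is a minimal sketch of the syntax; the rewritten selector in the comment is illustrative, not actual build output:

```html
<style scoped>
.red {
  color: #f00;
}
/* At build time, vue-loader rewrites this to something like
   .red[data-v-xxxxxxx], confining it to this component's elements. */
</style>
```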
Hot Reload. When editing a Vue component during development, the component will be “hot-swapped” into the running app, maintaining the app state without having to reload the page. This greatly improves the development experience.
Now with all these fancy features, it could be a really daunting task to assemble the build stack yourself! Luckily, Vue provides vue-cli, a command-line interface that makes it trivially easy to get started:
- npm install -g vue-cli
- vue init webpack my-project
Answer the prompts, and the CLI will scaffold a project with all the aforementioned features working out of the box. All you need to do next is:
- cd my-project
- npm install # install dependencies
- npm run dev # start dev server at http://localhost:8080
For full details on what is included in the generated project, check out the project template documentation.
We haven’t really written any real app code so far, but I hope we have got you excited about learning more.
In the next article, Ryan Chenkie will start taking us through a series of building a full-fledged SPA using this stack. Stay tuned!
]]>Icon fonts are great tools for building applications and websites nowadays. They have a great many benefits over fixed-sized icons like:
These are simple benefits, but so powerful when used in a real-world application.
Icon fonts are incredibly easy to use. All you have to do is load the CSS file and you’re good to go!
For example, in order to use FontAwesome fonts, we just have to load the CSS file and apply the appropriate classes to either an <i> or <span> tag.
<!-- load the CSS file -->
<link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/font-awesome/4.3.0/css/font-awesome.min.css">
<!-- use the icon font -->
<!-- use i or span tag -->
<i class="fa fa-fire"></i>
<span class="fa fa-user"></span>
Let’s get into the best icon fonts you can use for your projects today.
Bonus: SVG Developer Icons by Scotch
All these sets of icons are unique in their own way. Mix and match them as you create your own customized application or website.
]]>The approach of this article is this:
You are a seasoned Node.js developer and are looking to learn a new language, but you don’t want to go deep, just see how things compare with your current expertise, and then make a final decision.
Chances are, you were initially a PHP or Python developer, then you found your way into Node.js and now you feel like you want to expand your expertise. Of course, I might be wrong, but in this article, we are going to look at some common patterns when working with Node.js and how they compare in Go.
The most common reasons to learn Go include the following:
Here’s an example with the net/http package in the go documentation. Examples are provided, and there are even tabs for the source code files. Sweet!
Here’s a “hello world” program in Node.js
console.log('Hello World');
Running this will produce Hello World in the terminal.
- node app.js
Here’s an equivalent hello world program in Go
package main
import "fmt"
func main() {
fmt.Println("Hello World")
}
Go follows a certain structure which involves:
This code can be run by typing in the following or view it in GoPlay Space.
- go run main.go
where main.go is the name of the file.
JavaScript is dynamically typed, which means you do not have to specify types when defining variables, and the variables can change their types as you program along.
That being said, JavaScript has the following types:
To define variables in JavaScript (Node.js) we’d write this:
const num = 3; // declaring a constant number
let flt = 2.34; // declaring a number (float)
let a = true; // declaring a boolean
let b = false;
var name = 'Scotch'; // declaring a string
Go, however, is statically typed, which means we have to define the types beforehand, or assign them and let them be inferred. Here’s a comprehensive list of Go Types.
An equivalent of the above JavaScript definitions in Go is
package main
const num = 3 // declaring a constant
func main() {
var flt float64 = 2.34 // declaring a float
var name string = "Scotch" // declaring a string
a, b := true, false // shorthand declaration syntax
_, _, _, _ = flt, name, a, b // Go rejects unused variables; blank identifiers keep this example compiling
}
Assigning initial values in Go is done with the var keyword, a variable name, and a type. Additionally, and more commonly, the := shorthand syntax can be used. When declaring variables with :=, they are automatically assigned the correct type. You can also assign multiple values in one line, as we have done in a, b := true, false, which assigns both a and b the right-hand values respectively.
Here is an array of strings in JavaScript
const names = ['Scotch', 'IO'];
While Go has arrays, what we typically refer to as arrays in JavaScript are referred to as slices in Go. Here’s an example of a slice in Go:
package main
func main() {
names := []string{"Scotch", "IO"}
}
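To make the array/slice distinction concrete, here is a small sketch (the compareLengths helper name is ours, for illustration): an array’s length is fixed and part of its type, while a slice can grow with append.

```go
package main

import "fmt"

// compareLengths contrasts a fixed-size array with a slice.
func compareLengths() (int, int) {
	arr := [2]string{"Scotch", "IO"}   // array: the length 2 is part of the type
	names := []string{"Scotch", "IO"}  // slice: dynamically sized, like a JS array
	names = append(names, "Tutorials") // slices can grow
	return len(arr), len(names)
}

func main() {
	a, n := compareLengths()
	fmt.Println(a, n) // 2 3
}
```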
As an example, we’ll write a small program that returns the substring of a string at index 10.
So a sentence like Luke, I'm not your Father will end up being Luke, I'm.
JavaScript
const sentence = 'Luke, I\'m not your Father';
console.log(sentence.substr(0,10));
Go
package main
import "fmt"
func main() {
sentence := "Luke, I'm not your Father"
fmt.Println(sentence[:10])
}
You can run the app in Goplay Space
Conditional statements include if else and switch statements. Here’s an example in Node.js.
const cats = 10;
if (cats > 10) {
console.log('you have many cats');
} else {
console.log('you have few cats');
}
const cat_fur = "calico";
switch (cat_fur) {
case 'tabby':
console.log('tabby cat');
break;
case 'calico':
console.log('calico cat');
break;
default:
///
}
Here’s an equivalent in Go
package main
import "fmt"
func main() {
cats := 10
if cats > 10 {
fmt.Println("you have many cats")
} else {
fmt.Println("you have few cats")
}
cat_fur := "calico"
switch cat_fur {
case "tabby":
fmt.Println("tabby cat")
case "calico":
fmt.Println("calico cat")
default:
///
}
}
You can run the app in GoPlay Space.
You’ll notice the conditionals are a bit cleaner in Golang, with no parentheses around the conditions.
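As a side note, Go’s if also accepts a short init statement before the condition, which keeps the variable scoped to the if/else blocks. A small sketch (the describe helper is ours, for illustration):

```go
package main

import "fmt"

// describe uses Go's if-with-init form: `many` is only visible
// inside the if/else blocks, not in the rest of the function.
func describe(cats int) string {
	if many := cats > 10; many {
		return "you have many cats"
	}
	return "you have few cats"
}

func main() {
	fmt.Println(describe(12)) // you have many cats
}
```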
JavaScript has 3 loops: the for loop, the while loop, and the do while loop. Here’s a for loop example.
// normal for loop
for (i = 0; i < 10; i++) {
console.log(i);
}
// key, value loop
for (var key in p) {
if (p.hasOwnProperty(key)) {
console.log(key + ' -> ' + p[key]);
}
}
// another key value loop
Object.keys(obj).forEach(function(key) {
console.log(key, obj[key]);
})
// there's also a `for...of`
Go has only one type of loop: the for loop. Don’t let that deceive you, though, as the for loop in Go is very versatile and can emulate almost any type of loop. Let’s look at a simple example:
package main
import (
"fmt"
)
func main() {
for i := 0; i < 10; i++ {
fmt.Println(i)
}
// key value pairs
kvs := map[string]string{
"name": "Scotch",
"website": "https://scotch.io",
}
for key, value := range kvs {
fmt.Println(key, value)
}
}
You can run the app in GoPlay Space.
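Since Go has no while keyword, the condition-only form of for fills that role. A short sketch (the countUp helper is ours, for illustration):

```go
package main

import "fmt"

// countUp uses a condition-only for loop, Go's equivalent of while.
func countUp(limit int) int {
	count := 0
	for count < limit { // no init or post statement: behaves like `while`
		count++
	}
	return count
}

func main() {
	fmt.Println(countUp(3)) // 3
	// A `for` with no condition at all is an infinite loop; break exits it.
	for {
		break
	}
}
```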
Objects are a big part of JavaScript and exist in almost every program. Here’s an Object in JavaScript.
// an object
const Post = {
ID: 300,
Title: "Moving from Node.js to Go",
Author: "Christopher Ganga",
Difficulty: "Beginner",
}
console.log(Post)
// access values
console.log(Post.ID)
console.log(Post.Title)
console.log(Post.Author)
// ....
// we can also define classes in javascript.
Since Go is statically typed, we need to do a little extra work to define objects. There are two ways to do this; the first involves using a map. A map is a key-value data structure, where the keys form a set (they contain no duplicates).
package main
import (
"fmt"
)
func main() {
Post := map[string]interface{}{
"ID": 300,
"Title": "Moving from Node.js to Go",
"Author": "Christopher Ganga",
"Difficulty": "Beginner",
}
fmt.Println(Post)
// to access values
fmt.Println(Post["ID"])
fmt.Println(Post["Title"])
fmt.Println(Post["Author"])
// ....
}
You can run this example in Goplay Space
The other way to write objects in Go is by using structs. A struct is an abstract data structure with properties and methods. It’s a close equivalent to a class in JavaScript.
package main
import (
"fmt"
)
type Post struct {
ID int
Title string
Author string
Difficulty string
}
func main() {
// create an instance of the Post
p := Post{
ID: 300,
Title: "Moving from Node.js to Go",
Author: "Christopher Ganga",
Difficulty: "Beginner",
}
fmt.Println(p)
// to access values
fmt.Println(p.ID)
fmt.Println(p.Title)
fmt.Println(p.Author)
// ....
}
Struct defines the name of the type and its properties together with the types. We can then create an instance of the type (Post).
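Although the example above only shows properties, methods can be attached to a struct via a receiver, which is the closest Go gets to a method on a JavaScript class. A minimal sketch (the Byline method is ours, for illustration):

```go
package main

import "fmt"

// Post is a plain struct with two properties.
type Post struct {
	Title  string
	Author string
}

// Byline is a method: the (p Post) receiver plays the role of `this`.
func (p Post) Byline() string {
	return p.Title + " by " + p.Author
}

func main() {
	p := Post{Title: "Moving from Node.js to Go", Author: "Christopher Ganga"}
	fmt.Println(p.Byline()) // Moving from Node.js to Go by Christopher Ganga
}
```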
Now that we know a little about the similarities and differences in language constructs, we can have a look at servers. Since we are coming from Node.js it’s likely that we’re building a server, that returns JSON for instance.
In Node.js, chances are while writing a server, you are using Express as the base library for your server. It’s the most common, comes with a router, and is the most battle-tested. Here’s a Node.js server.
const express = require('express');
const app = express();
app.get('/', (req, res) => {
res.send('Hello from Express!');
})
app.listen(3000, err => {
if (err) {
return console.log('something bad happened', err);
}
console.log('server is listening on 3000');
})
The Go standard library provides everything we need to get a server up and running without any external dependencies. The net/http package provides most of this functionality.
When building larger applications, however, it is common to expand on the base net/http package with third-party packages, and one popular choice that provides a great deal of functionality is the Gorilla Web Toolkit.
Here’s an equivalent Server in Go.
package main
import (
"net/http"
)
func Hello(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("Hello World"))
}
func main() {
http.HandleFunc("/", Hello)
if err := http.ListenAndServe(":8080", nil); err != nil {
panic(err)
}
}
We call http.HandleFunc and give it a route and a handler, similar to the callback we give to Express routes.
You can test this by running go run main.go, assuming your file is named main.go.
Now, let’s introduce a router library, because if we don’t, we’ll have to test whether a request came in as a POST, GET, or the like, and use if statements to match specific routes. Something like this:
if req.Method == "POST" {
// do this
}
To get packages with Golang, you usually use go get <github-link>.
To get Gorilla Mux from the Gorilla Web Toolkit we mentioned earlier, we would write the following in our terminal:
- go get -u github.com/gorilla/mux
Then we are able to do this. I pulled this directly from Gorilla mux documentation.
package main
import (
"fmt"
"net/http"
"github.com/gorilla/mux"
)
func main() {
r := mux.NewRouter()
r.HandleFunc("/", HomeHandler)
r.HandleFunc("/products", ProductsHandler)
r.HandleFunc("/articles", ArticlesHandler)
// details
r.HandleFunc("/products/{key}", ProductHandler)
r.HandleFunc("/articles/{category}/", ArticlesCategoryHandler)
r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler)
http.Handle("/", r)
http.ListenAndServe(":8080", nil) // start the server; without this, main would exit immediately
}
// example handler
func ArticlesCategoryHandler(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
w.WriteHeader(http.StatusOK)
fmt.Fprintf(w, "Category: %v\n", vars["category"])
}
We see that Gorilla mux accepts route patterns the same way Express does. It can get a little more complicated than what I’ve described, but I hope you get the idea.
Gorilla mux also comes with helper functions similar to what npm’s body-parser gives us. Go purists, however, may claim that these functions can easily be written by hand.
Middleware is a great part of Node.js servers. Middleware functions sit in the request pipeline and run before or after the actual request handler. In Node.js, here is a simple snippet for a middleware that retrieves a secret from the environment and uses it to authenticate a user. When reading environment variables in Node.js, dotenv is commonly used.
const express = require('express');
const app = express();
// authentication middleware
const authenticate = (req, res, next) => {
const secret = process.env.SECRET;
if (!secret) {
return res.send("secret not found");
}
if (!isAuthenticated(req, secret)) {
return res.send("invalid authentication");
}
return next();
};
// use middleware
app.get('/', authenticate, (req, res) => {
res.send('Hello from Express!');
})
app.listen(3000, err => {
if (err) {
return console.log('something bad happened', err);
}
console.log('server is listening on 3000');
})
Go takes a similar approach: all a middleware does is take in a request, do something with it, and decide whether the request should proceed.
package main
import (
"net/http"
"os"
)
// our sample authenticate middleware
func Authenticate(next http.HandlerFunc) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
secret := os.Getenv("SECRET")
if secret == "" {
w.Write([]byte("secret not found"))
return
}
if !isAuthenticated(r, secret) {
w.Write([]byte("invalid authentication"))
return
}
next.ServeHTTP(w, r)
}
}
func Hello(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("Hello World"))
}
func main() {
http.HandleFunc("/", Authenticate(Hello)) // call the middleware by wrapping
if err := http.ListenAndServe(":8080", nil); err != nil {
panic(err)
}
}
If you are a seasoned JavaScript developer, you’ve probably noticed functions are first-class citizens in Go, too.
We’ve just written a function that takes in an http.HandlerFunc, which is a function type declared as type HandlerFunc func(ResponseWriter, *Request), just like the handler we wrote. The advantage is that the http.HandlerFunc type has a ServeHTTP method, which takes in the response writer and the request pointer we passed and executes the handler call.
Some people call this approach wrapping functions, but this is the general idea. Again, you can easily write your own middleware, but there are a couple of libraries out there to help you.
Most of the time, our servers depend on an external API to get some data they need. Let’s say, for example, we are getting users from GitHub.
This is the approach you would take in a Node.js app.
You’ll first install an HTTP request module, such as axios.
- npm install axios
const axios = require('axios');
const url = 'https://api.github.com/users';
axios.get(url).then(res => {
// do something with the response
}).catch(err => {
// do something with the error
})
This piece of code can either be in your service or anywhere you like.
In Go, however, the net/http package can help us with this scenario.
package main
import (
"fmt"
"io/ioutil"
"log"
"net/http"
)
func main() {
URL := "https://api.github.com/users"
res, err := http.Get(URL)
if err != nil {
log.Println(err)
return
}
defer res.Body.Close() // always close the response body to release the connection
responseBodyBytes, err := ioutil.ReadAll(res.Body)
if err != nil {
log.Println(err)
return
}
fmt.Println(string(responseBodyBytes))
}
We use http.Get to make the request and check for errors. The response body comes back as bytes, so we have to read it first, then convert it to a string with string([]byte).
You can add this to a main.go file and run go run main.go.
This code can also easily be converted to a func and called repeatedly when needed.
Here’s an example showing various ways to use the http package, from the net/http documentation page.
resp, err := http.Get("http://example.com/")
...
resp, err := http.Post("http://example.com/upload", "image/jpeg", &buf)
...
resp, err := http.PostForm("http://example.com/form",
url.Values{"key": {"Value"}, "id": {"123"}})
Node.js has various npm modules to help with database connections, depending on the database you are using. These libraries usually come with their own methods that try to make the database easier to work with, and some even offer ORM-like features.
Go, however, takes a different approach. The standard library provides an interface for working with databases called database/sql, which RDBMS package authors can use to implement drivers for various databases. Authors following the standard interface ensure greater interoperability for their packages.
Common database driver packages include:
We all know how JavaScript has a lot of frameworks and libraries, and we use them occasionally to avoid reinventing the wheel.
The Go community however prefers using libraries instead of frameworks, so you will rarely find a team committing to a particular framework when building their applications.
That being said, there is one Framework I’d recommend since it is built by combining a lot of the commonly used packages and file structures.
Go’s files must be written within packages, and this usually affects your file structure.
In JavaScript, you’ll see a lot of require statements at the beginning of files, while in Golang the first line is always a package name, which is then used in the import path wherever the package is required: import package_path/package_name.
I hope you’ve gotten a gist of what it’s like to write Go, and you’d like to get into action. Golang is really praised for its concurrency and performance, and if you are building a large application, this would be a preferred choice.
One language is not better than the other. It’s about choosing the right tool for the job.
I’ve been following this transitional journey for a while now, and would gladly answer any questions you may have. Just leave a comment.
Happy Go-ing!
]]>I’m a big fan of speeding up every part of your development. If you shave off seconds here and there multiple times a day, you’ll save a ton of time over the course of a year.
This involves using the keyboard as often as possible and reaching for the mouse as little as possible. It’s a goal of mine to do an entire day without touching the mouse. Still haven’t gotten there.
Learning vim is a big part of being productive in your editor. Even putting vim in your browser with Vimium helps a ton.
Snippets are another way to save time on development. Simple React Snippets for Visual Studio Code by Burke Holland is a great way to speed up development.
Here’s imrc expanded to import React, { Component } from 'react';
Simple React Snippets can be found in the VS Code Extension Marketplace.
Whenever starting a new React file, I’ll use the imr snippet:
imr
Expands to:
import React from 'react'
And the imrc
snippet:
imrc
Expands to:
import React, { Component } from 'react'
After installing the VS Code Extension, you can use the snippets by typing the shortcut and hitting Tab or Enter.
Here are the ones I think are most helpful when starting new files:
imr - Import React
import React from 'react';
imrc - Import React and Component
import React, { Component } from 'react';
cc - Make a Class Component and export
class | extends Component {
state = { | };
render() {
return ( | );
}
}
export default |;
sfc - Make a stateless function component
const | = props => {
return ( | );
};
export default |;
cdm - componentDidMount
componentDidMount() {
|
}
cdu - componentDidUpdate
componentDidUpdate(prevProps, prevState) {
|
}
ss - setState
this.setState({ | : | });
ren - render
render() {
return (
|
);
}
There are a few more snippets that you can use that you can find on the official page.
]]>In Part 1 of this series, we learned how to create a RESTful API the TDD way. We covered writing tests and learned a lot about Flask. If you haven’t read Part 1, please do because this tutorial will build upon it.
In this part of the series, we’ll learn how to authenticate and authorize users in our API.
In this tutorial, we’ll talk about securing our API with token-based authentication and user authorization. We will integrate users into the API we built in Part 1.
In order to get started, ensure your virtual environment is activated.
We intend to allow bucketlists to be owned by users. For now, anyone can manipulate a bucketlist even if they did not create it. We’ve got to fix this security hole.
How do we keep track of users, you ask? We define a model.
# app/models.py
from app import db
from flask_bcrypt import Bcrypt
class User(db.Model):
"""This class defines the users table """
__tablename__ = 'users'
# Define the columns of the users table, starting with the primary key
id = db.Column(db.Integer, primary_key=True)
email = db.Column(db.String(256), nullable=False, unique=True)
password = db.Column(db.String(256), nullable=False)
bucketlists = db.relationship(
'Bucketlist', order_by='Bucketlist.id', cascade="all, delete-orphan")
def __init__(self, email, password):
"""Initialize the user with an email and a password."""
self.email = email
self.password = Bcrypt().generate_password_hash(password).decode()
def password_is_valid(self, password):
"""
Checks the password against its hash to validate the user's password
"""
return Bcrypt().check_password_hash(self.password, password)
def save(self):
"""Save a user to the database.
This includes creating a new user and editing one.
"""
db.session.add(self)
db.session.commit()
class Bucketlist(db.Model):
"""This class defines the bucketlist table."""
__tablename__ = 'bucketlists'
# define the columns of the table, starting with its primary key
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(255))
date_created = db.Column(db.DateTime, default=db.func.current_timestamp())
date_modified = db.Column(
db.DateTime, default=db.func.current_timestamp(),
onupdate=db.func.current_timestamp())
created_by = db.Column(db.Integer, db.ForeignKey(User.id))
def __init__(self, name, created_by):
"""Initialize the bucketlist with a name and its creator."""
self.name = name
self.created_by = created_by
def save(self):
"""Save a bucketlist.
This applies for both creating a new bucketlist
and updating an existing one.
"""
db.session.add(self)
db.session.commit()
@staticmethod
def get_all(user_id):
"""This method gets all the bucketlists for a given user."""
return Bucketlist.query.filter_by(created_by=user_id)
def delete(self):
"""Deletes a given bucketlist."""
db.session.delete(self)
db.session.commit()
def __repr__(self):
"""Return a representation of a bucketlist instance."""
return "<Bucketlist: {}>".format(self.name)
Here’s what we’ve done:
- We created a User model and defined a One-to-Many relationship between the two tables by adding the db.relationship() function on the User table (the parent table).
- cascade="all, delete-orphan" will delete all bucketlists when a referenced user is deleted.
- We hash passwords with generate_password_hash(password). This keeps our users’ passwords secure from dictionary and brute-force attacks.
- We added a static get_all() method to get all the bucketlists for a given user.
- pip install flask-bcrypt
Migrate the changes we’ve just made to the database we initially created in Part 1 of the series.
- python manage.py db migrate
- python manage.py db upgrade
Now we have a user table to keep track of registered users.
Our app will have many tests from now on. It’s best practice to have a test folder that will house all our tests, so we’ll create a folder called tests and move our test_bucketlists.py file into it.
Our directory structure should now look like this:
- ├── bucketlist
- ├── app
- │ ├── __init__.py
- │ └── models.py
- ├── instance
- │ ├── __init__.py
- │ └── config.py
- ├── manage.py
- ├── requirements.txt
- ├── run.py
- ├── tests
- │ └── test_bucketlist.py
Also, we’ll edit manage.py as follows:
import os
import unittest
# class for handling a set of commands
from flask_script import Manager
from flask_migrate import Migrate, MigrateCommand
from app import db, create_app
# initialize the app with all its configurations
app = create_app(config_name=os.getenv('APP_SETTINGS'))
migrate = Migrate(app, db)
# create an instance of class that will handle our commands
manager = Manager(app)
# Define the migration command to always be preceded by the word "db"
# Example usage: python manage.py db init
manager.add_command('db', MigrateCommand)
# define our command for testing called "test"
# Usage: python manage.py test
@manager.command
def test():
"""Runs the unit tests without test coverage."""
tests = unittest.TestLoader().discover('./tests', pattern='test*.py')
result = unittest.TextTestRunner(verbosity=2).run(tests)
if result.wasSuccessful():
return 0
return 1
if __name__ == '__main__':
manager.run()
The decorator on top of test() allows us to define a command called test. Inside the function, we load the tests from the tests folder using the TestLoader() class and then run them with TextTestRunner.run(). If the run is successful, we exit gracefully with return 0.
Let’s test it out on our terminal.
- python manage.py test
The tests should fail. This is because we’ve not modified our code to work with the new changes in the model.
From now on, we’ll use this command to run our tests.
Token-based authentication is a security technique that authenticates users who attempt to log in to a server using a security token provided by the server. Without the token, a user won’t be granted access to restricted resources. You can find more intricate details about token-based authentication here.
For us to implement this authentication, we’ll use a Python package called PyJWT. PyJWT allows us to encode and decode JSON Web Tokens (JWT).
That being said, let’s install it:
- pip install PyJWT
For our users to authenticate, the access token is going to be placed in the Authorization HTTP header in all our bucketlist requests.
Here’s what the header looks like:
Authorization: "Bearer <The-access-token-is-here>"
We’ll put the word Bearer before the token and separate them with a space character. Don’t forget the space between Bearer and the token.
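On the server side, the handler recovers the raw token by splitting the header value on that space. A minimal sketch in plain Python (the extract_token helper is ours, not from the tutorial):

```python
def extract_token(authorization_header):
    """Return the raw token from an 'Authorization: Bearer <token>' header."""
    if not authorization_header:
        return None
    parts = authorization_header.split(" ")
    # Expect exactly two parts: the word "Bearer" and the token itself
    if len(parts) != 2 or parts[0] != "Bearer":
        return None
    return parts[1]

print(extract_token("Bearer abc.def.ghi"))  # abc.def.ghi
print(extract_token("abc.def.ghi"))         # None (missing the Bearer prefix)
```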
We need to create a way to encode the token before it’s sent to the user. We also need to have a way to decode the token when the user sends it via the Authorization header.
In our models.py we’ll create a function inside our User model to generate the token and another one to decode it. Let’s add the following code:
# /app/models.py
## previous imports ###
import jwt
from datetime import datetime, timedelta
class User(db.Model):
"""Maps to users table """
__tablename__ = 'users'
###########################################
## Existing code for defining table columns is here ##
###########################################
def __init__(self, email, password):
#### INIT CODE LIES HERE ###################
###########################################
def password_is_valid(self, password):
##### PASSWORD CHECK CODE LIES HERE ####
###########################################
def save(self):
######### CODE FOR SAVING USER LIES HERE ##
############################################
def generate_token(self, user_id):
""" Generates the access token"""
try:
# set up a payload with an expiration time
payload = {
'exp': datetime.utcnow() + timedelta(minutes=5),
'iat': datetime.utcnow(),
'sub': user_id
}
# create the byte string token using the payload and the SECRET key
jwt_string = jwt.encode(
payload,
current_app.config.get('SECRET'),
algorithm='HS256'
)
return jwt_string
except Exception as e:
# return an error in string format if an exception occurs
return str(e)
@staticmethod
def decode_token(token):
"""Decodes the access token from the Authorization header."""
try:
# try to decode the token using our SECRET variable
payload = jwt.decode(token, current_app.config.get('SECRET'))
return payload['sub']
except jwt.ExpiredSignatureError:
# the token is expired, return an error string
return "Expired token. Please login to get a new token"
except jwt.InvalidTokenError:
# the token is invalid, return an error string
return "Invalid token. Please register or login"
The generate_token() method takes in a user ID as an argument and uses jwt to create a token signed with the secret key. It is time-based: the token is valid for 5 minutes, as specified in the timedelta. You can set this to your liking.
The decode_token() method takes in a token as an argument and checks whether the token is valid. If it is, it returns the user ID from the payload; otherwise it returns an error message saying the token is expired or invalid.
Don’t forget to import jwt and datetime above, as well as current_app from flask, which both methods use to read the SECRET configuration value.
Our app is growing bigger. We’ll have to organize it into components. Flask uses the concept of Blueprints to make application components.
Blueprints are simply a set of operations that can be registered on a given app. Think of it as an extension of the app that can address a specific functionality.
We’ll create an authentication blueprint. This blueprint will focus on handling user registration and logins.
Inside our /app directory, create a folder and call it auth.
Our auth folder should contain:
- an __init__.py file
- a views.py file
file, initialize a blueprint.
# auth/__init__.py
from flask import Blueprint
# This instance of a Blueprint that represents the authentication blueprint
auth_blueprint = Blueprint('auth', __name__)
from . import views
Then import the blueprint and register it at the bottom of app/__init__.py, just before the return app line.
# app/__init__.py
# imports lie here
def create_app(config_name):
#####################################################
### Existing code for intializing the app with its configurations ###
#####################################################
@app.route('/bucketlists/<int:id>', methods=['GET', 'PUT', 'DELETE'])
def bucketlist_manipulation(id, **kwargs):
#########################################################
### Existing code for creating, updating and deleting a bucketlist #####
#########################################################
...
# import the authentication blueprint and register it on the app
from .auth import auth_blueprint
app.register_blueprint(auth_blueprint)
return app
Testing should never be an afterthought. It should always come first.
We’re going to add a new test file that will house all our tests for the authentication blueprint. It’ll test whether our API can handle user registration, user log in, and access-token generation.
In our tests directory, create a file named test_auth.py. Write the following code in it:
# /tests/test_auth.py
import unittest
import json
from app import create_app, db
class AuthTestCase(unittest.TestCase):
"""Test case for the authentication blueprint."""
def setUp(self):
"""Set up test variables."""
self.app = create_app(config_name="testing")
# initialize the test client
self.client = self.app.test_client
# This is the user test json data with a predefined email and password
self.user_data = {
'email': 'test@example.com',
'password': 'test_password'
}
with self.app.app_context():
# create all tables
db.session.close()
db.drop_all()
db.create_all()
def test_registration(self):
"""Test user registration works correcty."""
res = self.client().post('/auth/register', data=self.user_data)
# get the results returned in json format
result = json.loads(res.data.decode())
# assert that the request contains a success message and a 201 status code
self.assertEqual(result['message'], "You registered successfully.")
self.assertEqual(res.status_code, 201)
def test_already_registered_user(self):
"""Test that a user cannot be registered twice."""
res = self.client().post('/auth/register', data=self.user_data)
self.assertEqual(res.status_code, 201)
second_res = self.client().post('/auth/register', data=self.user_data)
self.assertEqual(second_res.status_code, 202)
# get the results returned in json format
result = json.loads(second_res.data.decode())
self.assertEqual(
result['message'], "User already exists. Please login.")
We’ve initialized our test with a test client for making requests to our API and some test data.
The first test function test_registration()
sends a post request to /auth/register
and tests the response it gets. It ensures that the status code is 201, meaning we’ve successfully created a user.
The second test function tests whether the API can only register a user once. Having duplicates in the database is bad for business.
Now let’s run the tests using python manage.py test
. The tests should fail.
- ----------------------------------------------------------------------
- raise JSONDecodeError("Expecting value", s, err.value) from None
- json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
The reason our tests fail is simply because we lack the functionality they need to test. Let’s implement something that’ll make these two tests pass.
Open up the views.py
file and add the following code:
# /app/auth/views.py
from . import auth_blueprint
from flask.views import MethodView
from flask import make_response, request, jsonify
from app.models import User
class RegistrationView(MethodView):
"""This class registers a new user."""
def post(self):
"""Handle POST request for this view. Url ---> /auth/register"""
# Query to see if the user already exists
user = User.query.filter_by(email=request.data['email']).first()
if not user:
# There is no user so we'll try to register them
try:
post_data = request.data
# Register the user
email = post_data['email']
password = post_data['password']
user = User(email=email, password=password)
user.save()
response = {
'message': 'You registered successfully.'
}
# return a response notifying the user that they registered successfully
return make_response(jsonify(response)), 201
except Exception as e:
# An error occurred, therefore return a string message containing the error
response = {
'message': str(e)
}
return make_response(jsonify(response)), 401
else:
# There is an existing user. We don't want to register users twice
# Return a message to the user telling them that they already exist
response = {
'message': 'User already exists. Please login.'
}
return make_response(jsonify(response)), 202
registration_view = RegistrationView.as_view('registration_view')
# Define the rule for the registration url ---> /auth/register
# Then add the rule to the blueprint
auth_blueprint.add_url_rule(
'/auth/register',
view_func=registration_view,
methods=['POST'])
Here’s what we have added:
We imported make_response
(for returning our response) and jsonify
(for encoding our data in JSON and adding an application/json
header to the response).
We defined a class-based view, RegistrationView, which dispatches a POST
request to our post()
method.
In the post()
method, we check if the user exists in our database. If they don’t, we create a new user and return a message notifying them of their successful registration.
If the user already exists, they are reminded to log in.
We used the as_view()
method to make our class-based view callable so that it can take a request and return a response. We then defined the URL for registering a user as /auth/register
.
Let’s run our tests once more. Only the AuthTestCase tests should pass. The bucketlist tests still fail because we haven’t modified the __init__.py
code.
- test_already_registered_user (test_auth.AuthTestCase)
- Test that a user cannot be registered twice. ... ok
- test_registration (test_auth.AuthTestCase)
- Test user registration works correctly. ... ok
-
- Bucketlist failed tests fall here
- ----------------------------------------------------------------------
We’ll test our registration functionality by making a request using Postman.
But before we make the requests, ensure the API is up and running.
- python run.py development
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 225-021-817
Now you can make a POST request to localhost:5000/auth/register
. Specify an email and a password of your choice to represent the user we are registering. Click send.
A user will have to log in to gain access to our API. Currently, we are lacking this login functionality. Let’s start with some tests. We’ll add two more tests at the bottom of our test_auth.py
as follows:
# tests/test_auth.py
class AuthTestCase(unittest.TestCase):
"""Test case for the authentication blueprint."""
def setUp(self):
#### EXISTING CODE FOR SETUP LIES HERE ####
def test_registration(self):
#### EXISTING TEST CODE LIES HERE ####
def test_already_registered_user(self):
### EXISTING TEST CODE LIES HERE #####
def test_user_login(self):
"""Test registered user can login."""
res = self.client().post('/auth/register', data=self.user_data)
self.assertEqual(res.status_code, 201)
login_res = self.client().post('/auth/login', data=self.user_data)
# get the results in json format
result = json.loads(login_res.data.decode())
# Test that the response contains success message
self.assertEqual(result['message'], "You logged in successfully.")
# Assert that the status code is equal to 200
self.assertEqual(login_res.status_code, 200)
self.assertTrue(result['access_token'])
def test_non_registered_user_login(self):
"""Test non registered users cannot login."""
# define a dictionary to represent an unregistered user
not_a_user = {
'email': 'not_a_user@example.com',
'password': 'nope'
}
# send a POST request to /auth/login with the data above
res = self.client().post('/auth/login', data=not_a_user)
# get the result in json
result = json.loads(res.data.decode())
# assert that this response must contain an error message
# and an error status code 401(Unauthorized)
self.assertEqual(res.status_code, 401)
self.assertEqual(
result['message'], "Invalid email or password, Please try again")
The test_user_login()
function tests whether our API can successfully log in as a registered user. It also tests for the access token.
The other test function test_non_registered_user_login()
tests whether our API can restrict signing in to only registered users.
Again, we’ll make the tests pass by implementing their functionality. Let’s create the login view.
from . import auth_blueprint
from flask.views import MethodView
from flask import make_response, request, jsonify
from app.models import User
class RegistrationView(MethodView):
"""This class-based view registers a new user."""
#### EXISTING REGISTRATION CODE HERE ####
##########################################
class LoginView(MethodView):
"""This class-based view handles user login and access token generation."""
def post(self):
"""Handle POST request for this view. Url ---> /auth/login"""
try:
# Get the user object using their email (unique to every user)
user = User.query.filter_by(email=request.data['email']).first()
# Try to authenticate the found user using their password
if user and user.password_is_valid(request.data['password']):
# Generate the access token. This will be used as the authorization header
access_token = user.generate_token(user.id)
if access_token:
response = {
'message': 'You logged in successfully.',
'access_token': access_token.decode()
}
return make_response(jsonify(response)), 200
else:
# User does not exist or the password is wrong. Return an error message
response = {
'message': 'Invalid email or password, Please try again'
}
return make_response(jsonify(response)), 401
except Exception as e:
# Create a response containing a string error message
response = {
'message': str(e)
}
# Return a server error using the HTTP Error Code 500 (Internal Server Error)
return make_response(jsonify(response)), 500
# Define the API resource
registration_view = RegistrationView.as_view('registration_view')
login_view = LoginView.as_view('login_view')
# Define the rule for the registration url ---> /auth/register
# Then add the rule to the blueprint
auth_blueprint.add_url_rule(
'/auth/register',
view_func=registration_view,
methods=['POST'])
# Define the rule for the login url ---> /auth/login
# Then add the rule to the blueprint
auth_blueprint.add_url_rule(
'/auth/login',
view_func=login_view,
methods=['POST']
)
Here, we’ve defined a class-based view just like we did in the registration section.
It dispatches the POST
request to the post()
method as well. This is to capture the user credentials (email, password) when they log in. It checks whether the password given is valid, generates an access token for the user, and returns a response containing the token.
We’ve also handled exceptions gracefully so that if one occurs, our API will continue running and won’t crash.
Finally, we defined a URL for the login route.
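The User model’s password_is_valid(), generate_token(), and decode_token() methods are used above but not shown in this section; the tutorial most likely implements token handling with a JWT library such as PyJWT. As a rough, standard-library-only sketch of the same idea (the secret, lifetime, and error strings below are my assumptions, not the tutorial’s actual values):

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical secret; in the app this would come from Flask's config (e.g. SECRET_KEY)
SECRET = b"hard-to-guess-secret"

def generate_token(user_id, lifetime=3600):
    """Build a signed token carrying the user id and an expiry timestamp."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"sub": user_id, "exp": int(time.time()) + lifetime}).encode())
    signature = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload, hashlib.sha256).digest())
    # Returned as bytes, so the view can call .decode() on it before jsonifying
    return payload + b"." + signature

def decode_token(token):
    """Return the user id for a valid token, or an error message string."""
    try:
        payload, signature = token.split(b".")
        expected = base64.urlsafe_b64encode(
            hmac.new(SECRET, payload, hashlib.sha256).digest())
        # Constant-time comparison of the recomputed signature
        if not hmac.compare_digest(expected, signature):
            return "Invalid token. Please register or login"
        claims = json.loads(base64.urlsafe_b64decode(payload))
    except Exception:
        return "Invalid token. Please register or login"
    if claims["exp"] < time.time():
        return "Expired token. Please login to get a new token"
    return claims["sub"]
```

Because decode_token() returns an integer user id on success and a plain string on failure, an `isinstance(user_id, str)` check (which we use later in the bucketlist views) can tell the two cases apart.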
Make a POST request. Input the email and password we specified for the user during registration. Click send. You should get an access token in the JSON response.
If you run the tests, you will notice that the login tests pass, but the bucketlist one still fails. It’s time to refactor these tests.
First, we’ll create two helper functions for registering and signing in to our test user.
# tests/test_bucketlist.py
class BucketlistTestCase(unittest.TestCase):
"""This class represents the bucketlist test case"""
def setUp(self):
"""Set up test variables."""
#### SETUP VARIABLES ARE HERE #####
####################################
def register_user(self, email="user@test.com", password="test1234"):
"""This helper method helps register a test user."""
user_data = {
'email': email,
'password': password
}
return self.client().post('/auth/register', data=user_data)
def login_user(self, email="user@test.com", password="test1234"):
"""This helper method helps log in a test user."""
user_data = {
'email': email,
'password': password
}
return self.client().post('/auth/login', data=user_data)
############################################
##### ALL OUR TESTS METHODS LIE HERE #######
# Make the tests conveniently executable
if __name__ == "__main__":
unittest.main()
We do this so that when we want to register or log in as a test user (which is in all the tests), we don’t have to repeat ourselves. We’ll simply call the function and we are set.
Next, we’ll define a way to get the access token and add it to the Authorization header in all our client requests. Here’s a code snippet of how we’re going to do it.
def test_bucketlist_creation(self):
"""Test the API can create a bucketlist (POST request)"""
# register a test user, then log them in
self.register_user()
result = self.login_user()
# obtain the access token
access_token = json.loads(result.data.decode())['access_token']
# ensure the request has an authorization header set with the access token in it
res = self.client().post(
'/bucketlists/',
headers=dict(Authorization="Bearer " + access_token),
data=self.bucketlist)
We can now go ahead and refactor the whole test_bucketlist.py
file. After refactoring all our requests, we should have something like this:
import unittest
import os
import json
from app import create_app, db
class BucketlistTestCase(unittest.TestCase):
"""This class represents the bucketlist test case"""
def setUp(self):
"""Define test variables and initialize app."""
self.app = create_app(config_name="testing")
self.client = self.app.test_client
self.bucketlist = {'name': 'Go to Borabora for vacay'}
# binds the app to the current context
with self.app.app_context():
# create all tables
db.session.close()
db.drop_all()
db.create_all()
def register_user(self, email="user@test.com", password="test1234"):
user_data = {
'email': email,
'password': password
}
return self.client().post('/auth/register', data=user_data)
def login_user(self, email="user@test.com", password="test1234"):
user_data = {
'email': email,
'password': password
}
return self.client().post('/auth/login', data=user_data)
def test_bucketlist_creation(self):
"""Test API can create a bucketlist (POST request)"""
self.register_user()
result = self.login_user()
access_token = json.loads(result.data.decode())['access_token']
# create a bucketlist by making a POST request
res = self.client().post(
'/bucketlists/',
headers=dict(Authorization="Bearer " + access_token),
data=self.bucketlist)
self.assertEqual(res.status_code, 201)
self.assertIn('Go to Borabora', str(res.data))
def test_api_can_get_all_bucketlists(self):
"""Test API can get a bucketlist (GET request)."""
self.register_user()
result = self.login_user()
access_token = json.loads(result.data.decode())['access_token']
# create a bucketlist by making a POST request
res = self.client().post(
'/bucketlists/',
headers=dict(Authorization="Bearer " + access_token),
data=self.bucketlist)
self.assertEqual(res.status_code, 201)
# get all the bucketlists that belong to the test user by making a GET request
res = self.client().get(
'/bucketlists/',
headers=dict(Authorization="Bearer " + access_token),
)
self.assertEqual(res.status_code, 200)
self.assertIn('Go to Borabora', str(res.data))
def test_api_can_get_bucketlist_by_id(self):
"""Test API can get a single bucketlist by using it's id."""
self.register_user()
result = self.login_user()
access_token = json.loads(result.data.decode())['access_token']
rv = self.client().post(
'/bucketlists/',
headers=dict(Authorization="Bearer " + access_token),
data=self.bucketlist)
# assert that the bucketlist is created
self.assertEqual(rv.status_code, 201)
# get the response data in json format
results = json.loads(rv.data.decode())
result = self.client().get(
'/bucketlists/{}'.format(results['id']),
headers=dict(Authorization="Bearer " + access_token))
# assert that the bucketlist is actually returned given its ID
self.assertEqual(result.status_code, 200)
self.assertIn('Go to Borabora', str(result.data))
def test_bucketlist_can_be_edited(self):
"""Test API can edit an existing bucketlist. (PUT request)"""
self.register_user()
result = self.login_user()
access_token = json.loads(result.data.decode())['access_token']
# first, we create a bucketlist by making a POST request
rv = self.client().post(
'/bucketlists/',
headers=dict(Authorization="Bearer " + access_token),
data={'name': 'Eat, pray and love'})
self.assertEqual(rv.status_code, 201)
# get the json with the bucketlist
results = json.loads(rv.data.decode())
# then, we edit the created bucketlist by making a PUT request
rv = self.client().put(
'/bucketlists/{}'.format(results['id']),
headers=dict(Authorization="Bearer " + access_token),
data={
"name": "Dont just eat, but also pray and love :-)"
})
self.assertEqual(rv.status_code, 200)
# finally, we get the edited bucketlist to see if it is actually edited.
results = self.client().get(
'/bucketlists/{}'.format(results['id']),
headers=dict(Authorization="Bearer " + access_token))
self.assertIn('Dont just eat', str(results.data))
def test_bucketlist_deletion(self):
"""Test API can delete an existing bucketlist. (DELETE request)."""
self.register_user()
result = self.login_user()
access_token = json.loads(result.data.decode())['access_token']
rv = self.client().post(
'/bucketlists/',
headers=dict(Authorization="Bearer " + access_token),
data={'name': 'Eat, pray and love'})
self.assertEqual(rv.status_code, 201)
# get the bucketlist in json
results = json.loads(rv.data.decode())
# delete the bucketlist we just created
res = self.client().delete(
'/bucketlists/{}'.format(results['id']),
headers=dict(Authorization="Bearer " + access_token),)
self.assertEqual(res.status_code, 200)
# Test to see if it exists, should return a 404
result = self.client().get(
'/bucketlists/1',
headers=dict(Authorization="Bearer " + access_token))
self.assertEqual(result.status_code, 404)
# Make the tests conveniently executable
if __name__ == "__main__":
unittest.main()
We’ll refactor the methods that handle the HTTP requests for bucketlist creation and getting all the bucketlists. Open up /app/__init__.py
file and edit as follows:
# /app/__init__.py
## imports ##
from flask import request, jsonify, abort, make_response
def create_app(config_name):
from .models import Bucketlist, User
###########################################
### EXISTING APP CONFIG CODE LIES HERE ###
###########################################
@app.route('/bucketlists/', methods=['POST', 'GET'])
def bucketlists():
# Get the access token from the header
auth_header = request.headers.get('Authorization')
access_token = auth_header.split(" ")[1]
if access_token:
# Attempt to decode the token and get the User ID
user_id = User.decode_token(access_token)
if not isinstance(user_id, str):
# Go ahead and handle the request, the user is authenticated
if request.method == "POST":
name = str(request.data.get('name', ''))
if name:
bucketlist = Bucketlist(name=name, created_by=user_id)
bucketlist.save()
response = jsonify({
'id': bucketlist.id,
'name': bucketlist.name,
'date_created': bucketlist.date_created,
'date_modified': bucketlist.date_modified,
'created_by': user_id
})
return make_response(response), 201
else:
# GET all the bucketlists created by this user
bucketlists = Bucketlist.query.filter_by(created_by=user_id)
results = []
for bucketlist in bucketlists:
obj = {
'id': bucketlist.id,
'name': bucketlist.name,
'date_created': bucketlist.date_created,
'date_modified': bucketlist.date_modified,
'created_by': bucketlist.created_by
}
results.append(obj)
return make_response(jsonify(results)), 200
else:
# user is not legit, so the payload is an error message
message = user_id
response = {
'message': message
}
return make_response(jsonify(response)), 401
We first added two imports: the User
model and the make_response
from Flask.
In the bucketlist function, we check for the authorization header from the request and extract the access token. Then, we decoded the token using User.decode_token(token)
to give us the payload. The payload is expected to be a user ID if the token is valid and not expired. If the token is not valid or expired, the payload will be an error message as a string.
Copy the token and paste it to the header section, creating an Authorization header. Don’t forget to put the word Bearer before the token with a space separating them like this:
Authorization: "Bearer dfg32r22349r40eiwoijr232394029wfopi23r2.2342..."
Make a POST request to localhost:5000/bucketlists/
, specifying the name of the bucketlist. Click send.
Ensure you’ve set the Authorization header just as we did for the POST request.
Make a GET request to localhost:5000/bucketlists/
and retrieve all the bucketlists our user just created.
We’ll refactor the PUT
and DELETE
functionality the same way we tackled the GET
and POST
.
# /app/__init__.py
## imports ##
from flask import request, jsonify, abort, make_response
def create_app(config_name):
from .models import Bucketlist, User
############################################################
### Existing code for initializing the app with its configurations lies here ###
############################################################
@app.route('/bucketlists/', methods=['POST', 'GET'])
def bucketlists():
#### CODE FOR GET and POST LIES HERE#####
###############################
@app.route('/bucketlists/<int:id>', methods=['GET', 'PUT', 'DELETE'])
def bucketlist_manipulation(id, **kwargs):
# get the access token from the authorization header
auth_header = request.headers.get('Authorization')
access_token = auth_header.split(" ")[1]
if access_token:
# Get the user id related to this access token
user_id = User.decode_token(access_token)
if not isinstance(user_id, str):
# If user_id is not a string (an error message), we have a valid user id
# Get the bucketlist with the id specified from the URL (<int:id>)
bucketlist = Bucketlist.query.filter_by(id=id).first()
if not bucketlist:
# There is no bucketlist with this ID for this User, so
# Raise an HTTPException with a 404 not found status code
abort(404)
if request.method == "DELETE":
# delete the bucketlist using our delete method
bucketlist.delete()
response = {
"message": "bucketlist {} deleted".format(bucketlist.id)
}
return make_response(jsonify(response)), 200
elif request.method == 'PUT':
# Obtain the new name of the bucketlist from the request data
name = str(request.data.get('name', ''))
bucketlist.name = name
bucketlist.save()
response = {
'id': bucketlist.id,
'name': bucketlist.name,
'date_created': bucketlist.date_created,
'date_modified': bucketlist.date_modified,
'created_by': bucketlist.created_by
}
return make_response(jsonify(response)), 200
else:
# Handle GET request, sending back the bucketlist to the user
response = {
'id': bucketlist.id,
'name': bucketlist.name,
'date_created': bucketlist.date_created,
'date_modified': bucketlist.date_modified,
'created_by': bucketlist.created_by
}
return make_response(jsonify(response)), 200
else:
# user is not legit, so the payload is an error message
message = user_id
response = {
'message': message
}
# return an error response, telling the user they are unauthorized
return make_response(jsonify(response)), 401
# import the authentication blueprint and register it on the app
from .auth import auth_blueprint
app.register_blueprint(auth_blueprint)
return app
Running python manage.py test
should now yield passing tests.
- test_already_registered_user (test_auth.AuthTestCase)
- Test that a user cannot be registered twice. ... ok
- test_non_registered_user_login (test_auth.AuthTestCase)
- Test non registered users cannot login. ... ok
- test_registration (test_auth.AuthTestCase)
- Test user registration works correctly. ... ok
- test_user_login (test_auth.AuthTestCase)
- Test registered user can login. ... ok
- test_api_can_get_all_bucketlists (test_bucketlist.BucketlistTestCase)
- Test API can get a bucketlist (GET request). ... ok
- test_api_can_get_bucketlist_by_id (test_bucketlist.BucketlistTestCase)
- Test API can get a single bucketlist by using its id. ... ok
- test_bucketlist_can_be_edited (test_bucketlist.BucketlistTestCase)
- Test API can edit an existing bucketlist. (PUT request) ... ok
- test_bucketlist_creation (test_bucketlist.BucketlistTestCase)
- Test API can create a bucketlist (POST request) ... ok
- test_bucketlist_deletion (test_bucketlist.BucketlistTestCase)
- Test API can delete an existing bucketlist. (DELETE request). ... ok
-
- ----------------------------------------------------------------------
- Ran 9 tests in 1.579s
-
- OK
Now let’s test to see if it works on Postman.
Fire up the API using python run.py development
Make a GET request for a single bucketlist to localhost:5000/bucketlists/2
Feel free to play around with the PUT and DELETE functionality.
We’ve covered quite a lot on securing our API. We went through defining a user model and integrating users into our API. We also covered token-based authentication and used an authentication blueprint to implement it.
Even though our main focus is to write the code, we should not let testing be an afterthought. For us to improve code quality, there have to be tests. Testing is the secret to increasing the agility of your product development. In every project you do, put TDD first.
If you’ve coded this to the end, you are awesome!
Feel free to recommend this to friends and colleagues.
Materialize is a responsive CSS framework based on Google’s Material Design language. In this tutorial I will explain what Material Design is, and then we will build a portfolio website using Materialize. I will also compare Materialize with other popular CSS frameworks like Foundation and Bootstrap.
Material Design is a design language built around the challenge of creating a visual language for users that synthesizes the classic principles of good design with the innovation and possibility of technology and science.
Other competing design languages include flat design, Metro design, and realism. Material differs from them on the basis of color schemes, shapes, patterns, textures, and layouts. Material is the only design language that adds motion and depth to elements.
In material design, everything should have a certain z-depth that determines how far raised or close to the page the element is.
As the user interacts with the design, it transforms and reorganizes itself in a continuous fashion through motion.
According to the official website, “Materialize is a modern responsive front-end framework based on Material Design”. So it’s just one of the many CSS frameworks like Bootstrap, Foundation etc.
The difference between Materialize, Bootstrap, and Foundation is that Materialize is based on Google’s Material Design language, whereas Bootstrap and Foundation are based on the mobile-first and flat design languages, respectively.
Materialize provides all the CSS and JS components that Bootstrap and Foundation provide.
You can download Materialize CSS and JS files from Materialize Download Page. Now create index.html
and css/style.css
files. And finally create an images
directory where the images for our project will be kept.
Here is how our project directory will look:
----- css/
---------- materialize.min.css
---------- style.css
----- js/
---------- materialize.min.js
----- images/
- index.html
Here is the starting code in our index.html
file. Here we are loading the Materialize CSS and JS library and, also, our custom style.css
file.
<html>
<head>
<title>Materialize CSS Framework Demo</title>
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<link type="text/css" rel="stylesheet" href="css/materialize.min.css" media="screen,projection"/>
<link type="text/css" rel="stylesheet" href="css/style.css">
</head>
<body>
<!-- jQuery is required by Materialize to function -->
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script type="text/javascript" src="js/materialize.min.js"></script>
<script type="text/javascript">
//custom JS code
</script>
</body>
</html>
Material Design is based on some predefined colors. Materialize provides classes to provide those colors to font and background.
Here is an example:
BACKGROUND COLOR
<div class="card-panel teal lighten-2">This is a card panel with a teal lighten-2 class</div>
TEXT COLOR
<div class="card-panel">
<span class="blue-text text-darken-2">This is a card panel with dark blue text</span>
</div>
See the Pen yyMjVO by Narayan Prusty (@qnimate) on CodePen.
Materialize uses the standard 12 column fluid responsive grid system.
The .container
class is not strictly part of the grid, but it is important in laying out content. It is set to roughly 70% of the window width and helps you center and contain your page content. We use the container to contain our body content.
.row
class holds the grid. The .s
, .m
and .l
classes are used to define the width of columns for small, medium, and large screens.
Here is an example:
<div class="container">
<div class="row">
<div class="blue lighten-5 col s12 m1 l1">1</div>
<div class="blue lighten-4 col s12 m1 l1">1</div>
<div class="blue lighten-3 col s12 m1 l1">1</div>
<div class="blue lighten-2 col s12 m2 l2">2</div>
<div class="blue lighten-1 col s12 m3 l3">2</div>
<div class="blue col s12 m4 l4">2</div>
</div>
</div>
See the Pen xbqjrP by Narayan Prusty (@qnimate) on CodePen.
Complete Materialize Color Palette
SideNav is a navigation that works on all widths. It toggles from the left/right side of viewport.
Here is an example:
<ul id="slide-out" class="side-nav full">
<li><a href="#!">First Sidebar Link</a></li>
<li><a href="#!">Second Sidebar Link</a></li>
</ul>
<a href="#" data-activates="slide-out" class="button-collapse">
<i class="large mdi-navigation-menu"></i>
</a>
See the Pen zxZLGg by Narayan Prusty (@qnimate) on CodePen.
You can easily vertically center things by adding the class valign-wrapper
to the container holding the items you want to vertically align.
<div class="box valign-wrapper">
<h5>Vertical</h5>
</div>
See the Pen rayvgp by Narayan Prusty (@qnimate) on CodePen.
In material design, everything should have a certain z-depth that determines how far raised or close to the page the element is.
You can easily apply this shadow effect by adding a class="z-depth-n" to an HTML tag.
<p class="z-depth-1">z-depth-1</p>
<p class="z-depth-2">z-depth-2</p>
<p class="z-depth-3">z-depth-3</p>
<p class="z-depth-4">z-depth-4</p>
<p class="z-depth-5">z-depth-5</p>
See the Pen vExrBq by Narayan Prusty (@qnimate) on CodePen.
There are two main button types described in material design. The raised button is a standard button that signifies an action and seeks to give depth to a mostly flat page. The floating, circular action button is meant for very important functions.
<a class="waves-effect waves-light btn">Stuff</a>
<a class="waves-effect waves-light btn"><i class="mdi-file-cloud left"></i>button</a>
<a class="waves-effect waves-light btn"><i class="mdi-file-cloud right"></i>button</a>
<a class="btn-floating btn-large waves-effect waves-light red"><i class="mdi-content-add"></i></a>
See the Pen GgWGJg by Narayan Prusty (@qnimate) on CodePen.
Forms are the standard way to receive user inputted data. The transitions and smoothness of these elements are very important because of the inherent user interaction associated with forms.
Text fields allow user input. The border should light up simply and clearly indicating which field the user is currently editing. You must have an .input-field
div wrapping your input and label. This helps our jQuery animate the label. This is only used on input and textarea form elements.
If you don’t want the Green and Red validation states, just remove the validate
class from your inputs.
<div class="row">
<form class="col s12">
<div class="row">
<div class="input-field col s6">
<input id="first_name" type="text" class="validate">
<label for="first_name">First Name</label>
</div>
<div class="input-field col s6">
<input id="last_name" type="text" class="validate">
<label for="last_name">Last Name</label>
</div>
</div>
<div class="row">
<div class="input-field col s12">
<input id="username" type="text" class="validate">
<label for="username">Username</label>
</div>
</div>
<div class="row">
<div class="input-field col s12">
<input id="password" type="password" class="validate">
<label for="password">Password</label>
</div>
</div>
<div class="row">
<div class="input-field col s12">
<input id="email" type="email" class="validate">
<label for="email">Email</label>
</div>
</div>
</form>
</div>
See the Pen PwpabN by Narayan Prusty (@qnimate) on CodePen.
Materialize includes 740 Material Design icons, courtesy of Google. The icon font file is embedded into the Materialize CSS file in the form of a Data URI. Icon classes use pseudo-elements to select the HTML element and use generated content to populate it with icons via the icon’s Unicode code point.
To use these icons, just place the name of the icon into the class of an HTML tag.
To control the size of the icon, change the font-size
property of your icon. Optionally, you can use the small
, medium
and large
classes.
<i class="small mdi-content-add"></i>
<i class="medium mdi-content-add"></i>
<i class="large mdi-content-add"></i>
The tabs structure consists of an unordered list of tabs that have hashes corresponding to tab ids. Then, when you click on each tab, only the container with the corresponding tab id will become visible.
<div class="row">
<div class="col s12">
<ul class="tabs">
<li class="tab col s3"><a href="#test1">Test 1</a></li>
<li class="tab col s3"><a class="active" href="#test2">Test 2</a></li>
<li class="tab col s3"><a href="#test3">Test 3</a></li>
<li class="tab col s3"><a href="#test4">Test 4</a></li>
</ul>
</div>
<div id="test1" class="col s12">Test 1</div>
<div id="test2" class="col s12">Test 2</div>
<div id="test3" class="col s12">Test 3</div>
<div id="test4" class="col s12">Test 4</div>
</div>
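The visibility rule Materialize applies here can be sketched as a pure function. This is an illustration only, not part of Materialize's API: given the clicked tab's href and the list of panel ids, only the matching panel stays visible.

```javascript
// Sketch of the tabs' show/hide rule: the panel whose id matches the
// clicked tab's hash is visible; every other panel is hidden.
function visiblePanels(clickedHref, panelIds) {
  var target = clickedHref.replace(/^#/, '');
  return panelIds.map(function (id) {
    return { id: id, visible: id === target };
  });
}
```

Clicking the Test 2 tab (href="#test2") would leave only the test2 container visible.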
Material box is a Material Design implementation of the Lightbox plugin, used when a user clicks on an image that can be enlarged. Material box centers the image and enlarges it in a smooth, non-jarring manner. To dismiss the image, the user can click on the image again, scroll away, or press the ESC key.
It is very easy to add a short caption to your photo. Just add the caption as a data-caption attribute.
<img class="materialboxed" width="300" src="https://cask.scotch.io/2015/01/784014790032582570.png" data-caption="Materialize Demo">
Add a dropdown list to any button. Make sure that the data-activates attribute matches the id in the <ul> tag. You can add a divider with the <li class="divider"></li> tag.
<!-- Dropdown Trigger -->
<a class='dropdown-button btn' href='#' data-activates='dropdown1'>Drop Me!</a>
<!-- Dropdown Structure -->
<ul id='dropdown1' class='dropdown-content'>
<li><a href="#!">one</a></li>
<li><a href="#!">two</a></li>
<li class="divider"></li>
<li><a href="#!">three</a></li>
</ul>
We covered some of the most important components of the Materialize CSS framework. It's time to put them together and create a portfolio site. You can find a complete demo of the portfolio site we will be creating in this tutorial.
In our portfolio site, we will first have a big horizontal banner that displays your name and profession. We can create this using Materialize text formatting tags and a little CSS.
<div class="intro deep-orange lighten-2 z-depth-1">
<h1 class="grey-text text-lighten-5">narayan prusty</h1>
<h5 class="grey lighten-4 grey-text text-darken-1">web and mobile developer</h5>
</div>
We need to display our bio, profile, and current job status in the portfolio. For this, we can create a three-column grid. The grid has three columns on medium and large screens, but stacks on mobile phones.
<div class="container about">
<h5>about me</h5>
<h6>let me introduce myself</h6>
<hr>
<div class="row">
<div class="col s12 m4 l4">
<h6>Story</h6>
<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.</p>
</div>
<div class="col s12 m4 l4">
<h6>Profile</h6>
<div class="card blue-grey darken-1">
<div class="card-content white-text">
<img src="http://labs.qnimate.com/portfolio-materialize/images/profile.png" width="64" height="64">
<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore.</p>
</div>
<div class="card-action">
<a href="#">Link</a>
<a href='#'>Link</a>
</div>
</div>
</div>
<div class="col s12 m4 l4">
<h6>current jobs</h6>
<ul class="collapsible">
<li class="active">
<div class="collapsible-header"><i class="mdi-av-web"></i>Designer</div>
<div class="collapsible-body"><p>Lorem ipsum dolor sit amet.</p></div>
</li>
<li>
<div class="collapsible-header"><i class="mdi-editor-format-align-justify"></i>Developer</div>
<div class="collapsible-body"><p>Lorem ipsum dolor sit amet.</p></div>
</li>
<li>
<div class="collapsible-header"><i class="mdi-av-play-shopping-bag"></i>Video Editor</div>
<div class="collapsible-body"><p>Lorem ipsum dolor sit amet.</p></div>
</li>
<li>
<div class="collapsible-header"><i class="mdi-editor-insert-comment"></i>Support Asst.</div>
<div class="collapsible-body"><p>Lorem ipsum dolor sit amet.</p></div>
</li>
</ul>
</div>
</div>
</div>
We used Materialize cards in the middle column. Cards are a convenient means of displaying content composed of different types of objects. They're also well-suited for presenting similar objects whose size or supported actions can vary considerably, like photos with captions of variable length.
Here we used a Materialize collapsible in the last column. Collapsibles are accordion elements that expand when clicked. They allow you to hide content that is not immediately relevant to the user.
You also need to display some of your awesome work on your portfolio. You'll need an image of each project and a title/link.
We will display the projects via floated material boxes.
<div class="container portfolio">
<h5>portfolio</h5>
<h6>MY LATEST PROJECTS</h6>
<hr>
<div class="row">
<div class="col s12 m12 l12 portfolio-holder">
<img class="materialboxed" src="http://labs.qnimate.com/portfolio-materialize/images/project.png">
<img class="materialboxed" src="http://labs.qnimate.com/portfolio-materialize/images/project.png">
<img class="materialboxed" src="http://labs.qnimate.com/portfolio-materialize/images/project.png">
<img class="materialboxed" src="http://labs.qnimate.com/portfolio-materialize/images/project.png">
<img class="materialboxed" src="http://labs.qnimate.com/portfolio-materialize/images/project.png">
<img class="materialboxed" src="http://labs.qnimate.com/portfolio-materialize/images/project.png">
</div>
</div>
</div>
We need to display a contact form in case the user wants to contact us. We can also display a phone number, an address, and an email address as other ways to communicate.
We can build the form using Material form classes.
<div class="container contact">
<h5>contact</h5>
<h6>get in touch with me</h6>
<hr>
<div class="row">
<div class="col s12 m6 l6">
<div class="row">
<form class="col s12">
<div class="row">
<div class="input-field col s6">
<input id="first_name" type="text" class="validate">
<label for="first_name">First Name</label>
</div>
<div class="input-field col s6">
<input id="last_name" type="text" class="validate">
<label for="last_name">Last Name</label>
</div>
</div>
<div class="row">
<div class="input-field col s12">
<input id="email" type="email" class="validate">
<label for="email">E-Mail</label>
</div>
</div>
<textarea class="materialize-textarea" placeholder="Your Message" required></textarea>
<button class="btn waves-effect waves-light" type="submit" name="action">Submit
<i class="mdi-content-send right"></i>
</button>
</form>
</div>
</div>
<div class="col s12 m6 l6 contact-holder">
<h6 class="mdi-action-home">Address</h6>
<p>Nr. 6, 21 Awesome Street, London, UK</p>
<h6 class="mdi-hardware-phone-android">Phone Number</h6>
<p>+91 9912776151</p>
<h6 class="mdi-action-open-in-browser">Website</h6>
<p>qnimate.com</p>
</div>
</div>
</div>
At the bottom of the site we will keep a footer that displays copyright text and a link. Materialize provides classes to create a footer in no time.
<footer>
<div class="footer-copyright">
<div class="container">
© 2014 Copyright Text
<a class="grey-text text-lighten-4 right" href="#!">Link</a>
</div>
</div>
</footer>
We saw some of the features and components of Material Design. Materialize offers a lot more components, so you can build any kind of website front end.
If you're planning to create a new website or redesign your site, I recommend choosing the Materialize framework: very few sites use Material Design, so your site will stand out from the crowd. You can also use Materialize when designing hybrid mobile apps. Please share your experiences with Materialize below.
Google Material Design is all the rage right now. With Google announcing the new design philosophy and using Polymer to create rich animated applications, many developers are starting to incorporate these ideas into their own experiments.
We created our own Google Material Design Checkboxes using CSS3 last week and here are some more examples of Google Material Design:
Today we’ll be looking at how to recreate the Polymer input boxes using CSS. Here is an example:
http://codepen.io/sevilayha/pen/IdGKH
We will be doing all of this in CSS and just a tiny bit of JavaScript. Let’s start setting up our HTML so that we can style it and add our animations and transitions in CSS.
The HTML for this project will be very simple. We just need a form with two groups of inputs.
Note: We are working within CodePen
Here is the HTML:
<form>
<div class="group">
<input type="text" required>
<span class="highlight"></span>
<span class="bar"></span>
<label>Name</label>
</div>
<div class="group">
<input type="text" required>
<span class="highlight"></span>
<span class="bar"></span>
<label>Email</label>
</div>
</form>
Here we have the four components we need:
- input will serve as the input.
- highlight will be the little highlight that flashes across the input.
- bar will hold the two bars that make up the underline.
- label will act as a placeholder until we click into our input. Then it will move and become a label.
With our simple HTML ready to go, let's move on to the CSS transitions and animations.
We’ll break this down into three parts: the label/placeholder, the underline, and the highlight. Let’s style the foundation so we have a good starting point.
Note: For simplicity's sake, we won't be adding vendor prefixes like -moz and -webkit.
/* form starting stylings ------------------------------- */
.group {
position: relative;
margin-bottom: 45px;
}
input {
font-size: 18px;
padding: 10px 10px 10px 5px;
display: block;
width: 300px;
border: none;
border-bottom: 1px solid #757575;
}
input:focus {
outline: none;
}
We’re just placing things and adding some padding with the code above. We also set the group
to position:relative;
so that we can place the other elements relative to that. Now let’s start looking at animating our parts. The two techniques we’ll use are CSS transitions and CSS animations.
We will activate all of our transitions and animations when the input is focused on. In CSS, we call that using input:focus
. Let’s see how each part is created and activated.
We’ll absolutely position the label relative to the group
. Here is the code for the label and when the input is focused:
/* LABEL ======================================= */
label {
color: #999;
font-size: 18px;
font-weight: normal;
position: absolute;
pointer-events: none;
left: 5px;
top: 10px;
transition: 0.2s ease all;
}
/* active state */
input:focus ~ label,
input:valid ~ label {
top: -20px;
font-size: 14px;
color: #5264AE;
}
Now when we focus on our input, the label will change color, move up, and the font will get smaller. We also create the stylings for the :valid
pseudo-class so that we can apply that if our input box is filled in. This will let the label stay in the active state, otherwise, it will move back over the input. All done here. Let’s move on to the underline.
We will use the pseudo-classes :before
and :after
to style the left and right parts of the bar. They will start from the center and widen to the outsides. That will give our underline effect.
/* BOTTOM BARS ================================= */
.bar {
position: relative;
display: block;
width: 300px;
}
.bar:before,
.bar:after {
content: '';
height: 2px;
width: 0;
bottom: 1px;
position: absolute;
background: #5264AE;
transition: 0.2s ease all;
}
.bar:before {
left: 50%;
}
.bar:after {
right: 50%;
}
/* active state */
input:focus ~ .bar:before,
input:focus ~ .bar:after {
width: 50%;
}
This is the part of our application where we will need to use an animation. We will need to have the highlight
show up, move to the left, and disappear. Since there are three parts to this, we need to make an animation instead of a transition.
/* HIGHLIGHTER ================================== */
.highlight {
position: absolute;
height: 60%;
width: 100px;
top: 25%;
left: 0;
pointer-events: none;
opacity: 0.5;
}
/* active state */
input:focus ~ .highlight {
animation: inputHighlighter 0.3s ease;
}
/* ANIMATIONS ================ */
@keyframes inputHighlighter {
from { background: #5264AE; }
to { width: 0; background: transparent; }
}
Now we have our highlight working. With all of our CSS parts working, we now have an input box similar to the Google Material Design input boxes.
Note: This method has been replaced by the method below.
We have one last thing to finalize with our implementation of these input boxes. After typing into an input box and clicking out of it, the label moves back over the input and overlaps the content we just wrote. To keep the label in its raised position, we can give the input a used class and style input.used ~ label with the same rules as input:focus ~ label. All we have to do now is apply that class using jQuery.
$(document).ready(function() {
$('input').blur(function() {
// check if the input has any value (if we've typed into it)
if ($(this).val())
$(this).addClass('used');
else
$(this).removeClass('used');
});
});
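The same toggle can be written without jQuery. Here is a sketch using standard DOM properties (markUsed is an illustrative name; the event wiring is shown commented out because it needs a browser):

```javascript
// Add or remove the 'used' class depending on whether the input has a value.
// Accepts anything exposing .value and .classList, i.e. a DOM input element.
function markUsed(input) {
  if (input.value) {
    input.classList.add('used');
  } else {
    input.classList.remove('used');
  }
}

// In the browser you would wire it to the blur event:
// document.querySelectorAll('input').forEach(function (el) {
//   el.addEventListener('blur', function () { markUsed(el); });
// });
```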
Just like that, we’re all good to go!
http://codepen.io/sevilayha/pen/IdGKH
Thanks to Felipe Mammoli in the comments for the tip on creating this without any JS at all. All we have to do is add a required
attribute to our input boxes like so:
<input type="text" required>
Once we have added that rule, we can use the :valid
pseudo-class to check if something is typed into that input box. Now we can apply the class we had originally created to move our label above the input.
/* active state */
input:focus ~ label,
input:valid ~ label {
top: -20px;
font-size: 14px;
color: #5264AE;
}
And there we have it: a cool way to implement Google Material input boxes in CSS. While they don't have all the fancy parts of the official input boxes, like animating based on the location of the click event, they're looking pretty good! For more Material Design, take a look at the Polymer Project to create cool Material Design components. Also, here's a cool Codepen of our demo with input validation by Don Page and another great project to create Material Design components in CSS/JS: Waves.
If you’re looking for these inputs and want to use Angular and have it validated with Angular, Martin Hotell has updated this in his awesome Angular Validated Plunker.
In this tutorial, we will learn what a pipe is, how to build a custom pipe, and how to make a pipe available application-wide. Live example here in plnkr.
Angular 2 comes with a stock of pipes such as DatePipe
, UpperCasePipe
, LowerCasePipe
, CurrencyPipe
, and PercentPipe
. They are all immediately available for use in any template.
For example, utilize the uppercase pipe to display a person's name in capital letters.
import { Component } from '@angular/core';
@Component({
selector: 'my-app',
template: '<p>My name is <strong>{{ name | uppercase }}</strong>.</p>',
})
export class AppComponent {
name = 'john doe';
}
The output of the example above is My name is JOHN DOE.
Now if you would like to capitalize the first letter of each word, you can create a custom pipe to do so.
import { Pipe, PipeTransform } from '@angular/core';
@Pipe({name: 'capitalize'})
export class CapitalizePipe implements PipeTransform {
transform(value: string, args: string[]): any {
if (!value) return value;
return value.replace(/\w\S*/g, function(txt) {
return txt.charAt(0).toUpperCase() + txt.substr(1).toLowerCase();
});
}
}
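The transform logic is plain JavaScript and can be exercised outside Angular. This sketch (the standalone capitalize function is ours, not part of the pipe API) extracts just the replace call:

```javascript
// Capitalize the first letter of each word and lowercase the rest,
// using the same regex as the pipe's transform method above.
function capitalize(value) {
  if (!value) return value;
  return value.replace(/\w\S*/g, function (txt) {
    return txt.charAt(0).toUpperCase() + txt.substr(1).toLowerCase();
  });
}
// capitalize('john doe') returns 'John Doe'
```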
To use this pipe, let's modify our app component, like this:
import { Component } from '@angular/core';
@Component({
selector: 'my-app',
template: '<p>My name is <strong>{{ name | capitalize }}</strong>.</p>', // change to use capitalize pipe
})
export class AppComponent {
name = 'john doe';
}
The output of the example above is My name is John Doe
.
Now imagine that you have a pipe that you will use in almost all of your components (e.g., a translate pipe). How can we make it globally available, just like the built-in pipes?
There’s a way to do it. We can include the pipe in the application module of our application.
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { AppComponent } from './app.component';
import { CapitalizePipe } from './capitalize.pipe'; // import our pipe here
@NgModule({
imports: [ BrowserModule ],
declarations: [ AppComponent, CapitalizePipe ], // include capitalize pipe here
bootstrap: [ AppComponent ]
})
export class AppModule { }
An Angular module is a great way to organize the application and extend it with capabilities from external libraries. It consolidates components, directives, and pipes into cohesive blocks of functionality. Refer to the Angular documentation.
Now, we can run our application and check the page result.
Over the last few years, browsers got some superpowers. They evolved from simple viewers for HTML & CSS to platforms executing our beloved web applications.
This is opening the doors for developers to do incredible things. And this is awesome. However, there is one big challenge that comes with every new shiny application.
With everything that is new, chances are high for bugs and issues to occur.
With Usersnap, I’ve found a great solution for that: its console log recorder enables me to record client-side JavaScript errors. One by one.
In this tutorial, I’ll walk you through the process of setting up the client-side of JavaScript error logging.
There are a couple of reasons why client-side JavaScript errors have become more and more important to log than ever before.
First, JavaScript is the most popular programming language of the web. With new JavaScript libraries going mainstream every day, developers are demonstrating that almost everything is possible inside browsers.
The short answer is: Because writing good code is hard. And reproducing client-side errors is even harder.
Especially if you’re using non-trivial JavaScript in your codebase, you will have a hard time finding those bugs. Just think about all these different browser versions and devices for a moment.
Let’s keep this short. Too few websites log JavaScript errors. – Karl Seguin
The basic problem with client-side JavaScript errors is that they happen in your users' browsers, out of your sight, and are hard to reproduce.
At this point, you probably agree that it is time for a way to attach the console output to get high-quality bug reports.
With the use of the Usersnap console recorder I found a great solution for recording, reproducing, and fixing front-end errors.
Basically, the Usersnap Console Recorder is a tool to record every front-end error (such as XHR traces, JavaScript exceptions, etc.).
And it sends this information along with an annotated screenshot to the developer of the site.
The setup process for the Usersnap console recorder is simple:
You can sign up for a free Usersnap trial here. Create the first project and add the URLs you want to track.
After creating your first project, you need to add the URLs of your site and embed the Usersnap widget.
It's pretty simple, and plugins are also available for various CMSs.
After you embed the JavaScript widget on your site or app, you'll see a little feedback button on your site.
The feedback button is fully customizable in the project settings.
Enabling the console recorder is very easy. Open the widget configuration tab in the project settings and click on “change color and features of the widget”.
Enable the console recorder feature by clicking on the on/off checkbox.
Every submitted report will now contain the log output in the Usersnap dashboard.
You’re done. Every created bug report with the embedded Usersnap feedback widget will now attach client-side errors as well.
In our example below, we see that there is an error:
Uncaught SyntaxError: Unexpected token {
We also see the call stack, and it looks like the JSON.parse method failed.
Imagine that this JSON was generated specifically for a client’s account. Without this error log, the only way to reproduce the issue would be to log into your user’s account and repeat the same actions your user made. In most cases, this isn’t possible.
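For illustration, malformed JSON reproduces exactly this class of error. The function name and fallback shape here are hypothetical, not part of any article code:

```javascript
// JSON.parse throws a SyntaxError on malformed input (e.g. single quotes),
// which is the kind of client-side failure a console recorder captures.
function parseAccountData(raw) {
  try {
    return JSON.parse(raw);
  } catch (err) {
    // In production, an error logger would record err.message here.
    return { error: err.name };
  }
}
```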
According to my experience, the console recorder works with every modern web framework out of the box.
However, there are some limitations listed in the documentation of the console recorder.
Errors that occur very early (e.g., a body onload function that fails) could be missed by the recorder. Usersnap is loaded asynchronously to not hit your page loading time and therefore this might be missed.
If you are using AngularJS, the console recorder needs a little modification from your end.
If you add this library to your Angular JS project you’ll get AngularJS errors attached to your bug reports with the console recorder.
Inject it into your dependencies in your main app like this:
angular.module('yourapp', ["usersnapLogging"])
In most cases, we cannot access our user’s account in order to reproduce certain issues. The debugging of such client-side errors might take hours or even days.
The Usersnap console recorder fixes this problem and hands us the information we need to have.
As a developer, I’m now able to solve such client-side errors quickly. And most importantly, without spending too much time on finding my client’s bugs.
Single-page apps are becoming increasingly popular. Sites that mimic single-page app behavior are able to provide the feel of a phone/tablet application. Angular helps to create applications like this easily.
We’re just going to make a simple site with a home, about, and contact page. Angular is built for much more advanced applications than this, but this tutorial will show many of the concepts needed for those larger projects.
While this can be done with just JavaScript and AJAX calls, Angular will make this process easier as our app starts growing.
- script.js <!-- stores all our angular code -->
- index.html <!-- main layout -->
- pages <!-- the pages that will be injected into the main layout -->
----- home.html
----- about.html
----- contact.html
This is the simple part. We’re using Bootstrap and Font Awesome. Open up your index.html
file and we’ll add a simple layout with a navigation bar.
<!DOCTYPE html>
<html>
<head>
<!-- load bootstrap and fontawesome via CDN -->
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css" />
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/font-awesome/4.0.0/css/font-awesome.css" />
<!-- load angular and angular route via CDN -->
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.25/angular.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.25/angular-route.js"></script>
<script src="script.js"></script>
</head>
<body>
<!-- HEADER AND NAVBAR -->
<header>
<nav class="navbar navbar-default">
<div class="container">
<div class="navbar-header">
<a class="navbar-brand" href="/">Angular Routing Example</a>
</div>
<ul class="nav navbar-nav navbar-right">
<li><a href="#"><i class="fa fa-home"></i> Home</a></li>
<li><a href="#about"><i class="fa fa-shield"></i> About</a></li>
<li><a href="#contact"><i class="fa fa-comment"></i> Contact</a></li>
</ul>
</div>
</nav>
</header>
<!-- MAIN CONTENT AND INJECTED VIEWS -->
<div id="main">
<!-- angular templating -->
<!-- this is where content will be injected -->
</div>
</body>
</html>
For linking to pages, we’ll use the #
. We don’t want the browser to think we are actually traveling to about.html
or contact.html
.
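Conceptually, hash-based routing is just a lookup from the URL fragment to a template. A minimal sketch, where the route table mirrors the pages we set up and templateForHash is an illustrative name:

```javascript
// Map a location hash to the template that should be injected into the layout.
// Unknown hashes fall back to the home template.
var routes = {
  '': 'pages/home.html',
  'about': 'pages/about.html',
  'contact': 'pages/contact.html'
};

function templateForHash(hash) {
  var key = hash.replace(/^#/, '');
  return routes.hasOwnProperty(key) ? routes[key] : routes[''];
}
```

Angular's router does this (and much more) for us, as we'll see below.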
We’re going to set up our application. Let’s create the angular module and controller. Check out the docs for more information on each.
First, we have to create our module and controller in javascript. We will do that now in script.js
.
// create the module and name it scotchApp
var scotchApp = angular.module('scotchApp', []);
// create the controller and inject Angular's $scope
scotchApp.controller('mainController', function($scope) {
// create a message to display in our view
$scope.message = 'Main Controller message';
});
Let’s add the module and controller to our HTML so that Angular knows how to bootstrap our application. To test that everything is working, we will also show the $scope.message
variable that we created.
<!DOCTYPE html>
<!-- define angular app -->
<html ng-app="scotchApp">
<head>
<!-- load bootstrap and fontawesome via CDN -->
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css" />
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/font-awesome/4.0.0/css/font-awesome.css" />
<!-- load angular via CDN -->
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.25/angular.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.25/angular-route.js"></script>
<script src="script.js"></script>
</head>
<!-- define angular controller -->
<body ng-controller="mainController">
...
<!-- MAIN CONTENT AND INJECTED VIEWS -->
<div id="main">
{{ message }}
<!-- angular templating -->
<!-- this is where content will be injected -->
</div>
Inside of our main
div, we will now see the message that we created. Since we have our module and controller set up and we know that Angular is working properly, we will start working on using this layout to show the different pages.
ng-view
is an Angular directive that will include the template of the current route (/home, /about, or /contact) in the main layout file. In plain words, it takes the file we want based on the route and injects it into our main layout (index.html
).
We will add the ng-view
code to our site in the div#main
to tell Angular where to place our rendered pages.
...
<!-- MAIN CONTENT AND INJECTED VIEWS -->
<div id="main">
<!-- angular templating -->
<!-- this is where content will be injected -->
<div ng-view></div>
</div>
...
Since we are making a single-page application and we don’t want any page refreshes, we’ll use Angular’s routing capabilities.
Let’s look at our Angular file and add it to our application. We will be using $routeProvider
in Angular to handle our routing. This way, Angular will handle all of the magic required to go get a new file and inject it into our layout.
AngularJS 1.2 and Routing The ngRoute module is no longer included in Angular after version 1.1.6. You will need to call the module and add it to the head of your document to use it. This tutorial has been updated for AngularJS 1.2
// create the module and name it scotchApp
// also include ngRoute for all our routing needs
var scotchApp = angular.module('scotchApp', ['ngRoute']);
// configure our routes
scotchApp.config(function($routeProvider) {
$routeProvider
// route for the home page
.when('/', {
templateUrl : 'pages/home.html',
controller : 'mainController'
})
// route for the about page
.when('/about', {
templateUrl : 'pages/about.html',
controller : 'aboutController'
})
// route for the contact page
.when('/contact', {
templateUrl : 'pages/contact.html',
controller : 'contactController'
});
});
// create the controller and inject Angular's $scope
scotchApp.controller('mainController', function($scope) {
// create a message to display in our view
$scope.message = 'Main Controller message.';
});
scotchApp.controller('aboutController', function($scope) {
$scope.message = 'About Controller message.';
});
scotchApp.controller('contactController', function($scope) {
$scope.message = 'Contact Controller message.';
});
Now we have defined our routes with $routeProvider
. As you can see by the configuration, you can specify the route, the template file to use, and even a controller. This way, each part of our application will use its own view and Angular controller.
Clean URLs: By default, Angular will throw a hash (#
) into the URL. To get rid of this, we will need to use $locationProvider
to enable the HTML History API. This will remove the hash and make pretty URLs. For more information: Pretty URLs in AngularJS: Removing the #.
Our home page will pull the home.html
file. About and contact will pull their respective files. Now if we view our app, and click through the navigation, our content will change just how we wanted.
To finish off this tutorial, we just need to define the pages that will be injected. We will also have them each display a message from its respective controller.
<div class="jumbotron text-center">
<h1>Home Page</h1>
<p>{{ message }}</p>
</div>
<div class="jumbotron text-center">
<h1>About Page</h1>
<p>{{ message }}</p>
</div>
<div class="jumbotron text-center">
<h1>Contact Page</h1>
<p>{{ message }}</p>
</div>
Working Locally: Angular routing will only work if you have an environment set up for it. Make sure you are serving the files from http://localhost or some similar environment; otherwise Angular will throw a Cross origin requests are only supported for HTTP error.
Once you have all the routing done, you can start to get really fancy with your site and add in animations. To do this, you will need the ngAnimate module provided by Angular. After that, you can animate your pages into view with CSS animations.
For a tutorial on how to get animations on your site, read: Animating AngularJS Apps: ngView.
Ideally, this technique would be used for an application after a person has signed in. You wouldn’t really want those pages indexed since they are personalized to that specific user. For example, you wouldn’t want your Reader account, Facebook logged in pages, or Blog CMS pages indexed.
If you did want SEO for your application though, how does SEO work for applications/sites that get their pages built with Javascript? Search engines have a difficult time processing these applications because the content is built dynamically by the browser and not visible to crawlers.
Techniques to make JavaScript single-page applications SEO friendly require regular maintenance. According to the official Google suggestions, you would create HTML snapshots. In outline, a crawler finds a URL of your application (e.g., https://scotch.io/seofriendly#key=value) and is served a pre-rendered HTML snapshot of that page. For more information on this process, be sure to look at Google's AJAX Crawling documentation and their guide on creating HTML snapshots.
SEO Article: We’ve written up a tutorial on how to make Angular SEO friendly. Give it a read if you’re interested: AngularJS SEO with Prerender.io.
This was a very simple tutorial on how to get Angular routing to work with a layout and separate views. Now you can go ahead and create larger single-page applications. There is much more to learn with Angular and I’ll keep writing about different features along my learning journey of Angular.
If anyone has any suggestions for future Angular articles or different ways to do what we’ve just done here (there are so many ways to write the same thing, it can drive a person insane), sound off in the comments.
If you are looking for more flexibility in routing like nested views and state-based templating instead of route-based, then you’ll definitely be interested in UI Router. For an article on UI Router: AngularJS Routing Using UI-Router
Note: Added information on SEO for using this technique.
Note: Updated article for AngularJS 1.2
There are two ways to build forms in Angular 2, namely template-driven and model-driven.
In this article, we will learn how to build a model-driven form with validation using the latest forms module, then discuss the advantages and disadvantages of model-driven forms compared to template-driven forms.
Please refer to How to Build Template-driven Forms in Angular 2 if you would like to learn about template-driven forms.
View Angular 2 - Model Driven Forms (final) scotch on plnkr
We will build a form to capture user information based on this interface.
export interface User {
name: string; // required with minimum 5 characters
address?: {
street?: string; // required
postcode?: string;
}
}
Here is how the UI will look:
Here’s our file structure:
|- app/
|- app.component.html
|- app.component.ts
|- app.module.ts
|- main.ts
|- user.interface.ts
|- index.html
|- styles.css
|- tsconfig.json
In order to use the new forms module, we need to install the @angular/forms npm package and import the reactive forms module in our application module.
- npm install @angular/forms --save
Here’s the module for our application app.module.ts
:
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule, ReactiveFormsModule } from '@angular/forms';
import { AppComponent } from './app.component';
@NgModule({
imports: [ BrowserModule, ReactiveFormsModule ],
declarations: [ AppComponent ],
bootstrap: [ AppComponent ]
})
export class AppModule { }
Let’s move on to create our app component.
import { Component, OnInit } from '@angular/core';
import { FormGroup, FormControl, FormBuilder, Validators } from '@angular/forms';
import { User } from './user.interface';
@Component({
moduleId: module.id,
selector: 'my-app',
templateUrl: 'app.component.html',
})
export class AppComponent implements OnInit {
public myForm: FormGroup; // our model driven form
public submitted: boolean; // keep track on whether form is submitted
public events: any[] = []; // use later to display form changes
constructor(private _fb: FormBuilder) { } // form builder simplify form initialization
ngOnInit() {
// we will initialize our form model here
}
save(model: User, isValid: boolean) {
this.submitted = true; // set form submit to true
// check if model is valid
// if valid, call API to save customer
console.log(model, isValid);
}
}
myForm will be our model-driven form. It implements the FormGroup interface. FormBuilder is not mandatory for building a model-driven form, but it simplifies the syntax; we’ll cover this later.
This is how our HTML view will look:
<form [formGroup]="myForm" novalidate (ngSubmit)="save(myForm.value, myForm.valid)">
<!-- We'll add our form controls here -->
<button type="submit">Submit</button>
</form>
We make sure we bind formGroup to our myForm property in the app.component.ts file.
We’ll handle the form submit (ngSubmit) event in the save() function that we defined in our app.component.ts file.
All set! Let’s implement our model-driven form.
There are two ways to initialize our form model using model-driven forms in Angular 2.
Here is the long way to define a form:
ngOnInit() {
// the long way
this.myForm = new FormGroup({
name: new FormControl('', [<any>Validators.required, <any>Validators.minLength(5)]),
address: new FormGroup({
street: new FormControl('', <any>Validators.required),
postcode: new FormControl('8000')
})
});
}
And here’s the short way (using the form builder):
ngOnInit() {
// the short way
this.myForm = this._fb.group({
name: ['', [<any>Validators.required, <any>Validators.minLength(5)]],
address: this._fb.group({
street: ['', <any>Validators.required],
postcode: ['']
})
});
}
Both of these options will achieve the same outcome. The latter just has a simpler syntax.
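To see why the shorthand works, it helps to strip away Angular: a form-builder group essentially expands each ['initialValue', validators] tuple into a control object. The following is a framework-free sketch under that assumption; the group function and the control shape are illustrative, not Angular’s actual internals.

```javascript
// Hypothetical, simplified re-implementation of a form-builder "group":
// arrays become control objects, nested groups pass through unchanged.
function group(config) {
  const controls = {};
  for (const key of Object.keys(config)) {
    const def = config[key];
    if (Array.isArray(def)) {
      // def[0] is the initial value; def[1] (single item or array) holds validators
      const validators = def.length > 1 ? [].concat(def[1]) : [];
      controls[key] = { value: def[0], validators: validators };
    } else {
      controls[key] = def; // a nested group
    }
  }
  return { controls };
}
```

Calling group({ name: ['', ['required']], address: group({ street: ['', 'required'] }) }) yields the same nested controls shape that the long-hand FormGroup/FormControl construction builds explicitly.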
A form is a type of FormGroup. A FormGroup can contain other FormGroups or FormControls. In our case, myForm is a FormGroup. It contains the name form control and the address form group.
The address FormGroup contains 2 form controls: street and postcode.
We can define a validator for both FormGroup and FormControl. Both accept either a single validator or an array of validators.
Angular 2 comes with a few default validators, and we can build our custom validators too. In our case, name has two validators: required and minLength(5).
Street has only one validator: required.
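Under the hood, a validator is just a function that takes a control and returns either null (valid) or an error map. Here is a framework-free sketch modeled on Validators.minLength; the exact error shape is an illustrative assumption, not Angular’s guaranteed output.

```javascript
// A validator factory: returns a function that checks a control's value length.
function minLengthValidator(min) {
  return function (control) {
    const value = control.value || '';
    return value.length >= min
      ? null // valid: no errors
      : { minlength: { requiredLength: min, actualLength: value.length } };
  };
}
```

A required validator follows the same pattern, returning an error map such as { required: true } for empty values.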
Let’s add the user’s name control to our view.
<!-- app.component.html -->
...
<!-- We'll add our form controls here -->
<div>
<label>Name</label>
<input type="text" formControlName="name">
<small [hidden]="myForm.controls.name.valid || (myForm.controls.name.pristine && !submitted)">
Name is required (minimum 5 characters).
</small>
</div>
...
formControl has no exported value, so we need to read the error information from our form model.
Next, we’ll add our address form group to the view.
....
<div formGroupName="address">
<label>Address</label>
<input type="text" formControlName="street">
<small [hidden]="myForm.controls.address.controls.street.valid || (myForm.controls.address.controls.street.pristine && !submitted)">
street required
</small>
</div>
<div formGroupName="address">
<label>Postcode</label>
<input type="text" formControlName="postcode">
</div>
...
We have assigned the group name address to formGroupName. Please note that formGroupName can be used multiple times in the same form. In many examples, you’ll see people do this:
...
<div formGroupName="address">
<input formControlName="street">
<input formControlName="postcode">
</div>
...
This gives us the same results as above:
...
<div formGroupName="address">
<input formControlName="street">
</div>
<div formGroupName="address">
<input formControlName="postcode">
</div>
...
This is the same process as in the previous section to bind a form control.
Now the syntax gets even longer to retrieve control information. Oh my: myForm.controls.address.controls.street.valid.
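One way to tame such long paths is a tiny lookup helper. getControl below is a hypothetical convenience function (not part of Angular’s API) that walks a dotted path through nested controls objects:

```javascript
// Hypothetical helper: resolve 'address.street' against a nested form model,
// returning the control at that path or undefined if any segment is missing.
function getControl(group, path) {
  return path.split('.').reduce(function (ctrl, key) {
    return ctrl && ctrl.controls ? ctrl.controls[key] : undefined;
  }, group);
}
```

With it, the template-side check could be written as getControl(myForm, 'address.street').valid instead of the full chained path.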
Now, imagine we need to assign the default user name John to the field. How can we do that?
The easiest way works if John is a static value:
...
this.myForm = this._fb.group({
name: ['John', [ <any>Validators.required,
<any>Validators.minLength(5)]]
});
...
What if John is not a static value, and we only get the value from an API call after we initialize the form model? We can do this:
...
(<FormControl>this.myForm.controls['name'])
.setValue('John', { onlySelf: true });
...
The form control exposes a setValue function, which we can call to update our form control’s value.
setValue accepts an optional second parameter. In our case, we pass in { onlySelf: true }, meaning this change will only affect the validation of this control and not its parent.
By default, this.myForm.controls['name'] is of type AbstractControl. AbstractControl is the base class of FormGroup and FormControl. Therefore, we need to cast it to FormControl in order to utilize control-specific functions.
What about setting the whole form model at once? It’s possible! We can do something like this:
...
const people = {
name: 'Jane',
address: {
street: 'High street',
postcode: '94043'
}
};
(<FormGroup>this.myForm)
.setValue(people, { onlySelf: true });
...
Now that we’ve built our model-driven form, what are the advantages of using it over a template-driven form?
Since we have the form model defined in our code, we can unit test it. We won’t discuss testing in detail in this article.
With reactive forms, we can listen to form or control changes easily. Each form group or form control exposes a few observables which we can subscribe to (e.g., statusChanges, valueChanges, etc.).
Let’s say we want to do something every time any form value changes. We can do this:
subscribeToFormChanges() {
// initialize the stream
const myFormValueChanges$ = this.myForm.valueChanges;
// subscribe to the stream
myFormValueChanges$.subscribe(x => this.events
.push({ event: 'VALUE CHANGED', object: x }));
}
Then we call this function in our ngOnInit():
ngOnInit() {
// ...omitted for clarity...
// subscribe to form changes
this.subscribeToFormChanges();
}
Then we display all value-change events in our view.
...
Form changes:
<div *ngFor="let event of events">
<pre> {{ event | json }} </pre>
</div>
...
We can imagine more advanced use cases, such as changing form validation rules dynamically depending on user selection. Model-driven forms make this simpler.
It depends. If you are not doing unit testing (of course you should!), or you have a simple form, go ahead with template-driven forms.
If you have advanced use cases, then consider model-driven forms.
Something good about template-driven forms as compared to model driven forms, imho:
That’s it! Now that you know how to build a model-driven form, how about complex and nested model-driven forms? Say we allow the user to enter multiple addresses now; how can we handle form arrays and validation? You might be interested in How to Build Nested Model-driven Forms in Angular 2.
Happy coding.
Development folks work tirelessly to make building programs as easy as possible. The JavaScript, web, and mobile app developer communities have grown drastically since Node and Cordova were introduced. Developers with web design skills could, with less effort, roll out a server for their applications using JavaScript, with the help of Node.js.
Mobile lovers can, with the help of Cordova, now build rich hybrid apps using just JavaScript. Today, although it is old news, I am excited to share the ability to use JavaScript to build standalone desktop applications.
Node WebKit, normally written “node-webkit” or “NW.js”, is an app runtime based on Node.js and Chromium that enables us to develop OS-native apps using just HTML, CSS, and JavaScript.
Simply put, Node WebKit helps you utilize your skills as a web developer to build a native application that runs comfortably on Mac, Windows, and Linux with just a grunt/gulp (if preferred) build command.
This article concentrates a lot more on using Node WebKit, but in order to make things more interesting, we will be including other amazing solutions and they will include:
Furthermore, the application has three sections:
The web section will not be covered here; it will only serve as a test platform. But don’t worry, the code will be provided.
Level: Intermediate (Knowledge of MEAN is required)
We need to grab node-webkit
and other dependencies for our application. Fortunately, there are frameworks that make workflow easy and we will be using one of them to scaffold our application and concentrate more on the implementation.
Yo and Slush are popular generators, and either will work. I am going to use Slush, but feel free to use Yo if you prefer. To install Slush, make sure you have node and npm installed and run:
- npm install -g slush gulp bower slush-wean
The command will install the following globally on our system:
slush-wean: the generator for Node WebKit
bower: for frontend dependencies
Just like Yo, make your directory and scaffold your app using:
- mkdir scotch-chat
- cd scotch-chat
- slush wean
Running the command below will give us a glimpse of what we have been waiting for:
- gulp run
The image shows our app loading. The author of the generator was generous enough to provide a nice template with a simple loading animation. To look cooler, I replaced the loading text with Scotch’s logo.
If you are not comfortable with Slush automating things you can head right to Node WebKit on GitHub.
Now that we have set up our app, though empty, we will give it a break and prepare our server now.
The server basically consists of our model, routes, and socket events. We will keep it as simple as possible and you can feel free to extend the app as instructed at the end of the article.
Set up a folder on your PC in your favorite directory, and make sure its contents look like the below:
|- public
|- index.html
|- server.js
|- package.json
In the package.json file located in your root directory, describe your application and include the application’s dependencies:
{
"name": "scotch-chat",
"main": "server.js",
"dependencies": {
"mongoose": "latest",
"morgan": "latest",
"socket.io": "latest"
}
}
That will do. It is just a minimal setup, and we are keeping things simple and short. Run npm install in the root directory to install the specified dependencies.
- npm install
It is time to get our hands dirty! The first thing is to set up global variables in server.js that will hold the application’s already-installed dependencies.
// Import all our dependencies
var express = require('express');
var mongoose = require('mongoose');
var app = express();
var server = require('http').Server(app);
var io = require('socket.io')(server);
OK, I didn’t keep to my word. The variables are not only holding the dependencies; some are also configuring them for use.
To serve static files, express exposes a method to help configure the static files folder. It is simple:
...
// tell express where to serve static files from
app.use(express.static(__dirname + '/public'));
Next up is to create a connection to our database. I am working with a local MongoDB instance, which is optional, as you could also use a hosted MongoDB database. Mongoose is a Node module that exposes an API that makes working with MongoDB much easier.
...
mongoose.connect("mongodb://127.0.0.1:27017/scotch-chat");
With Mongoose we can now create our database schema and model. We also need to allow CORS in the application as we will be accessing it from a different domain.
...
// create a schema for chat
var ChatSchema = mongoose.Schema({
created: Date,
content: String,
username: String,
room: String
});
// create a model from the chat schema
var Chat = mongoose.model('Chat', ChatSchema);
// allow CORS
app.all('*', function(req, res, next) {
res.header("Access-Control-Allow-Origin", "*");
res.header('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE,OPTIONS');
res.header('Access-Control-Allow-Headers', 'Content-type,Accept,X-Access-Token,X-Key');
if (req.method == 'OPTIONS') {
res.status(200).end();
} else {
next();
}
});
Our server will have three routes in it. A route to serve the index file, another to set up chat data, and the last to serve chat messages filtered by room names:
/*||||||||||||||||||||||ROUTES|||||||||||||||||||||||||*/
// route for our index file
app.get('/', function(req, res) {
//send the index.html in our public directory
res.sendfile('index.html');
});
//This route is simply run only on first launch just to generate some chat history
app.post('/setup', function(req, res) {
//Array of chat data. Each object properties must match the schema object properties
var chatData = [{
created: new Date(),
content: 'Hi',
username: 'Chris',
room: 'php'
}, {
created: new Date(),
content: 'Hello',
username: 'Obinna',
room: 'laravel'
}, {
created: new Date(),
content: 'Ait',
username: 'Bill',
room: 'angular'
}, {
created: new Date(),
content: 'Amazing room',
username: 'Patience',
room: 'socket.io'
}];
//Loop through each of the chat data and insert into the database
for (var c = 0; c < chatData.length; c++) {
//Create an instance of the chat model
var newChat = new Chat(chatData[c]);
//Call save to insert the chat
newChat.save(function(err, savedChat) {
console.log(savedChat);
});
}
//Send a response so the server does not get stuck
res.send('created');
});
//This route produces a list of chats filtered by the 'room' query
app.get('/msg', function(req, res) {
//Find
Chat.find({
'room': req.query.room.toLowerCase()
}).exec(function(err, msgs) {
//Send
res.json(msgs);
});
});
/*||||||||||||||||||END ROUTES|||||||||||||||||||||*/
The first route I believe is easy enough. It will just send our index.html
file to our users.
The second /setup
is meant to be hit just once and at the initial launch of the application. It is optional if you don’t need some test data. It basically creates an array of chat messages (which matches the schema), loops through them, and inserts them into the database.
The third route /msg
is responsible for fetching chat history filtered with room names and returned as an array of JSON objects.
The most important part of our server is the real-time logic. Keeping in mind that we are working towards producing a simple application, our logic will be comprehensively minimal. Sequentially, we need to:
listen for a new connection and emit the list of available rooms,
add a new user to the default room and notify the room,
handle switching between rooms, and
save new chat messages and broadcast them to the right room.
Therefore:
/*||||||||||||||||SOCKET|||||||||||||||||||||||*/
//Listen for connection
io.on('connection', function(socket) {
//Globals
var defaultRoom = 'general';
var rooms = ["General", "angular", "socket.io", "express", "node", "mongo", "PHP", "laravel"];
//Emit the rooms array
socket.emit('setup', {
rooms: rooms
});
//Listens for new user
socket.on('new user', function(data) {
data.room = defaultRoom;
//New user joins the default room
socket.join(defaultRoom);
//Tell all those in the room that a new user joined
io.in(defaultRoom).emit('user joined', data);
});
//Listens for switch room
socket.on('switch room', function(data) {
//Handles joining and leaving rooms
//console.log(data);
socket.leave(data.oldRoom);
socket.join(data.newRoom);
io.in(data.oldRoom).emit('user left', data);
io.in(data.newRoom).emit('user joined', data);
});
//Listens for a new chat message
socket.on('new message', function(data) {
//Create message
var newMsg = new Chat({
username: data.username,
content: data.message,
room: data.room.toLowerCase(),
created: new Date()
});
//Save it to database
newMsg.save(function(err, msg){
//Send message to those connected in the room
io.in(msg.room).emit('message created', msg);
});
});
});
/*||||||||||||||||||||END SOCKETS||||||||||||||||||*/
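To make the room logic above less magical, here is a framework-free sketch of what Socket.io’s join/leave/emit-to-room mechanics do conceptually. The rooms map and the sockets’ receive method are illustrative assumptions, not Socket.io’s actual internals:

```javascript
// Track which sockets belong to which room, and deliver messages room-wide.
const rooms = new Map();

function join(room, socket) {
  if (!rooms.has(room)) rooms.set(room, new Set());
  rooms.get(room).add(socket);
}

function leave(room, socket) {
  if (rooms.has(room)) rooms.get(room).delete(socket);
}

function emitTo(room, message) {
  // Only members of the room receive the message
  (rooms.get(room) || new Set()).forEach(function (socket) {
    socket.receive(message);
  });
}
```

Switching rooms is then just leave(oldRoom, socket) followed by join(newRoom, socket), which is exactly what the 'switch room' handler does.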
Then the traditional server start:
server.listen(2015);
console.log('It\'s going down in 2015');
Fill the index.html
with any HTML that suits you and run node server.js
. localhost:2015
will give you the content of your HTML.
Time to pick up what we set aside while creating our server, which is now running. This section is quite easy, as it just requires your everyday knowledge of HTML, CSS, JS, and Angular.
We don’t need to create any! I guess that was the inspiration for generators. The first file you might want to inspect is the package.json.
Node WebKit basically requires two major files to run:
an entry point (e.g., index.html)
a package.json that tells it where the entry point is located
Our package.json has the basic content we are used to, except that its main is the location of the index.html, and it has a set of configurations under "window" from which we define all the properties of the app’s window, including icons, sizes, toolbar, frame, etc.
Unlike the server, we will be using Bower to load our dependencies, as this is a client application. Update your bower.json dependencies to:
"dependencies": {
"angular": "^1.3.13",
"angular-material" : "^0.10.0",
"angular-socket-io" : "^0.7.0",
"angular-material-icons":"^0.5.0",
"animate.css":"^3.0.0"
}
For a shortcut, just run the following command:
- bower install --save angular angular-material angular-socket-io angular-material-icons animate.css
Now that we have our frontend dependencies, we can update our views/index.ejs to:
<html><head>
<title>scotch-chat</title>
<link rel="stylesheet" href="css/app.css">
<link rel="stylesheet" href="css/animate.css">
<link rel="stylesheet" href="libs/angular-material/angular-material.css">
<script src="libs/angular/angular.js"></script>
<script src="http://localhost:2015/socket.io/socket.io.js"></script>
<script type="text/javascript" src="libs/angular-animate/angular-animate.js"></script>
<script type="text/javascript" src="libs/angular-aria/angular-aria.js"></script>
<script type="text/javascript" src="libs/angular-material/angular-material.js"></script>
<script type="text/javascript" src="libs/angular-socket-io/socket.js"></script>
<script type="text/javascript" src="libs/angular-material-icons/angular-material-icons.js"></script>
<script src="js/app.js"></script>
</head>
<body ng-controller="MainCtrl" ng-init="usernameModal()">
<md-content>
<section>
<md-list>
<md-subheader class="md-primary header">Room: {{room}} <span align="right">Username: {{username}} </span> </md-subheader>
<md-whiteframe ng-repeat="m in messages" class="md-whiteframe-z2 message" layout layout-align="center center">
<md-list-item class="md-3-line">
<img ng-src="img/user.png" class="md-avatar" alt="User" />
<div class="md-list-item-text">
<h3>{{ m.username }}</h3>
<p>{{m.content}}</p>
</div>
</md-list-item>
</md-whiteframe>
</md-list>
</section>
<div class="footer">
<md-input-container>
<label>Message</label>
<textarea ng-model="message" columns="1" md-maxlength="100" ng-enter="send(message)"></textarea>
</md-input-container>
</div>
</md-content>
</body>
</html>
We included all our dependencies and custom files (app.css and app.js). Things to note:
messages are looped over with ng-repeat and their values rendered to the browser
a message is sent when the ENTER key is pressed
on init, the user is asked for a preferred username
The main part of this section is the app.js file. It creates services to interact with the Node WebKit GUI, a directive to handle the ENTER keypress, and the controllers (main and dialog).
//Load angular
var app = angular.module('scotch-chat', ['ngMaterial', 'ngAnimate', 'ngMdIcons', 'btford.socket-io']);
//Set our server url
var serverBaseUrl = 'http://localhost:2015';
//Services to interact with nodewebkit GUI and Window
app.factory('GUI', function () {
//Return nw.gui
return require('nw.gui');
});
app.factory('Window', function (GUI) {
return GUI.Window.get();
});
//Service to interact with the socket library
app.factory('socket', function (socketFactory) {
var myIoSocket = io.connect(serverBaseUrl);
var socket = socketFactory({
ioSocket: myIoSocket
});
return socket;
});
In the snippet above, we created three Angular services. The first helps us get the Node WebKit GUI object, the second returns its Window property, and the third bootstraps Socket.io with the base URL.
//ng-enter directive
app.directive('ngEnter', function () {
return function (scope, element, attrs) {
element.bind("keydown keypress", function (event) {
if (event.which === 13) {
scope.$apply(function () {
scope.$eval(attrs.ngEnter);
});
event.preventDefault();
}
});
};
});
The above snippet is one of my favorites ever since I started using Angular. It binds a handler to the ENTER key so that an action can be triggered when the key is pressed.
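Conceptually, the directive reduces to a plain event-handler factory. Here is a framework-free sketch (onEnter is a hypothetical name; the real directive additionally wraps the callback in scope.$apply so Angular notices the change):

```javascript
// Returns a keydown/keypress handler that fires the callback only on ENTER (13).
function onEnter(callback) {
  return function (event) {
    if (event.which === 13 || event.keyCode === 13) {
      callback();
      // Suppress the default action (e.g., inserting a newline in a textarea)
      if (event.preventDefault) event.preventDefault();
    }
  };
}
```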
Finally, in app.js comes the almighty controller. We need to break things down to ease understanding, as we did in our server.js. The controller is expected to:
1. build the window menu and populate it with the available rooms,
2. ask the user for a preferred username via a modal,
3. listen for new messages, and
4. notify the server of a new message when the ENTER key is pressed.
With our objectives defined, let us code:
//Our Controller
app.controller('MainCtrl', function ($scope, Window, GUI, $mdDialog, socket, $http){
//Menu setup
//Modal setup
//listen for new message
//Notify server of the new message
});
That is our controller’s skeleton with all of its dependencies. As you can see, it has four internal comments which serve as placeholders for our code, as defined in the objectives. So let’s start with the menu.
//Global Scope
$scope.messages = [];
$scope.room = "";
//Build the window menu for our app using the GUI and Window service
var windowMenu = new GUI.Menu({
type: 'menubar'
});
var roomsMenu = new GUI.Menu();
windowMenu.append(new GUI.MenuItem({
label: 'Rooms',
submenu: roomsMenu
}));
windowMenu.append(new GUI.MenuItem({
label: 'Exit',
click: function () {
Window.close()
}
}));
We simply created instances of the menu and appended some menu items (Rooms and Exit) to it. The Rooms menu is expected to serve as a drop-down, so we have to ask the server for available rooms and append them to the Rooms menu:
//Listen for the setup event and create rooms
socket.on('setup', function (data) {
var rooms = data.rooms;
for (var r = 0; r < rooms.length; r++) {
//Loop and append room to the window room menu
handleRoomSubMenu(r);
}
//Handle creation of room
function handleRoomSubMenu(r) {
var clickedRoom = rooms[r];
//Append each room to the menu
roomsMenu.append(new GUI.MenuItem({
label: clickedRoom.toUpperCase(),
click: function () {
//What happens on clicking the rooms? Switch room.
$scope.room = clickedRoom.toUpperCase();
//Notify the server that the user changed his room
socket.emit('switch room', {
newRoom: clickedRoom,
username: $scope.username
});
//Fetch the new rooms messages
$http.get(serverBaseUrl + '/msg?room=' + clickedRoom).success(function (msgs) {
$scope.messages = msgs;
});
}
}));
}
//Attach menu
GUI.Window.get().menu = windowMenu;
});
The above code, with the help of a function, loops through the array of rooms received from the server and appends each to the rooms menu. With that, Objective #1 is completed.
Our second objective is to ask the user for a username using an Angular Material modal.
$scope.usernameModal = function (ev) {
//Launch Modal to get username
$mdDialog.show({
controller: UsernameDialogController,
templateUrl: 'partials/username.tmpl.html',
parent: angular.element(document.body),
targetEvent: ev,
})
.then(function (answer) {
//Set username with the value returned from the modal
$scope.username = answer;
//Tell the server there is a new user
socket.emit('new user', {
username: answer
});
//Set room to general;
$scope.room = 'GENERAL';
//Fetch chat messages in GENERAL
$http.get(serverBaseUrl + '/msg?room=' + $scope.room).success(function (msgs) {
$scope.messages = msgs;
});
}, function () {
Window.close();
});
};
As specified in the HTML, on init, usernameModal is called. It uses the mdDialog service to get the username of a joining user. If that is successful, it assigns the entered username to a bound scope property, notifies the server about that activity, and then puts the user in the default (GENERAL) room. If it is not successful, we close the app. Objective #2 completed!
//Listen for new messages (Objective 3)
socket.on('message created', function (data) {
//Push to new message to our $scope.messages
$scope.messages.push(data);
//Empty the textarea
$scope.message = "";
});
//Send a new message (Objective 4)
$scope.send = function (msg) {
//Notify the server that there is a new message with the message as packet
socket.emit('new message', {
room: $scope.room,
message: msg,
username: $scope.username
});
};
The remaining objectives are simple. #3 just listens for new messages and, when one arrives, pushes it to the array of existing messages, and #4 notifies the server of new messages when they are created. At the end of app.js, we create a function to serve as the controller for the modal:
//Dialog controller
function UsernameDialogController($scope, $mdDialog) {
$scope.answer = function (answer) {
$mdDialog.hide(answer);
};
}
To fix some ugly looks, update app.css:
body {
background: #fafafa !important;
}
.footer {
background: #fff;
position: fixed;
left: 0px;
bottom: 0px;
width: 100%;
}
.message.ng-enter {
-webkit-animation: zoomIn 1s;
-ms-animation: zoomIn 1s;
animation: zoomIn 1s;
}
Note the last style. We are using ngAnimate and animate.css to create a pretty animation for our messages.
I already wrote on how you can play with this concept here.
I can guess what you are worried about after looking at the image: the address bar, right? This is where the window configuration in the package.json comes in. Just change "toolbar": true to "toolbar": false.
I also set my icon to "icon": "app/public/img/scotch.png" to change the window icon to the Scotch logo. We can also add a notification once there is a new message:
var options = {
body: data.content
};
var notification = new Notification("Message from: "+data.username, options);
notification.onshow = function () {
// auto close after 2 seconds
setTimeout(function () {
notification.close();
}, 2000);
};
And even more fun…
I suggest you test the application by downloading the web client from GitHub. Run the server, then the web client, and then the app. Start sending messages from both the app and the web client and watch them appear in real time if you are sending them in the same room.
If you want to challenge yourself further, you can try to add the following to our app:
platform-specific builds with gulp deploy --{{platform}}, e.g., gulp deploy --mac
etc.
I am glad we made it to the end. Node WebKit is an amazing concept. Join the community and make building apps easier. Hope you had a lot of scotch today and that I made someone smile…
Laravel 5.3 has just been released, and there are a ton of great new features. One of the major improvements is in how you send mail in your applications.
Let’s take a look at sending emails. Before Laravel 5.3, sending emails in Laravel looked a lot like this:
Mail::send('emails.send', ['title' => $title, 'message' => $message], function ($message)
{
$message->from('no-reply@example.com', 'Bat Signal');
$message->to('batman@example.com');
});
This method worked for a while, but after sending a couple of emails, the codebase got messy. Since I didn’t really like this method, I found myself using event listeners to build emails.
At the time of this article, installing Laravel 5.3 with the Laravel installer is as simple as:
- laravel new project
Mailables in Laravel abstract building emails into a mailable class. Basically, mailables are responsible for collating data and passing it to views. Meanwhile, the API for sending emails got really simple.
To send emails in Laravel, all we have to do now is:
Mail::to('batman@example.com')->send(new KryptoniteFound);
Don’t get me wrong, the previous API will work just fine (and it will still work in your applications); it’s just that the Mail API got a whole lot simpler.
With artisan, our super handy Laravel CLI tool, we can simply create a mailable like this:
- php artisan make:mail <NameOfMailable>
Since our mailable’s name is KryptoniteFound, we can create our mailable using this command:
- php artisan make:mail KryptoniteFound
After we’ve created our mailable, we can see the newly created mailable class in app/Mail:
namespace App\Mail;
use Illuminate\Bus\Queueable;
use Illuminate\Mail\Mailable;
use Illuminate\Queue\SerializesModels;
use Illuminate\Contracts\Queue\ShouldQueue;
class KryptoniteFound extends Mailable
{
use Queueable, SerializesModels;
public function __construct()
{
//
}
public function build()
{
return $this->view('view.name');
}
}
The created class should look like the snippet above (comments stripped).
As we can see, the build method builds the message. For our case, we can replace view.name with the path to our email view, emails.kryptonite-found.
In resources/views, create a new Blade template in an emails folder called kryptonite-found.blade.php.
Any public property on your mailable class is automatically made available to your view file. So passing data to your views is as simple as making the data public on the mailable class.
Say, for example, I wanted to pass the weight of Kryptonite found; all I need to do is expose the total on the mailable like this:
public $total = 30;
While in our view template, we access the data like a normal variable.
<h1>Woot Woot!!</h1>
<p>Alfred just found <strong>{{ $total }}lbs</strong> of kryptonite</p>
We can also explicitly set data using the with method:
public function build()
{
return $this->view('emails.kryptonite-found')
->with($key, $value);
}
To send emails, we will use Mailtrap. For other service configurations, like Amazon SES, Chris wrote an article on the topic.
Next, we move to our .env file and configure our mail credentials:
MAIL_DRIVER="smtp"
MAIL_HOST="mailtrap.io"
MAIL_PORT=2525
MAIL_USERNAME=MAIL_USERNAME
MAIL_PASSWORD=MAIL_PASSWORD
MAIL_ENCRYPTION=null
Replace MAIL_USERNAME and MAIL_PASSWORD with your Mailtrap details; using the placeholder credentials above won’t work.
Still on the issue of configuring mail, we also need to configure the mailer’s from details. In config/mail.php, look for the from key and configure the address and name. If you are satisfied with the defaults, you can just leave it, but under no condition can it be null.
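For reference, the from entry in config/mail.php might look like the fragment below; the address and name values here are examples, not required values.

```php
// config/mail.php (fragment) -- illustrative values
'from' => [
    'address' => 'no-reply@example.com',
    'name'    => 'Bat Signal',
],
```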
bcc, cc, and the rest can be added by calling the corresponding methods on $this in the build method:
public function build()
{
$address = 'ignore@example.com';
$name = 'Ignore Me';
$subject = 'Krytonite Found';
return $this->view('emails.kryptonite-found')
->from($address, $name)
->cc($address, $name)
->bcc($address, $name)
->replyTo($address, $name)
->subject($subject);
}
Previously, our routes were located in app/Http/routes.php, but with this release, routes live in a top-level routes directory, with separate files per interface (web, API, or console). For this tutorial, we only need the routes/web.php file:
use App\Mail\KryptoniteFound;
Route::get('/', function () {
// send an email to "batman@batcave.io"
Mail::to('batman@batcave.io')->send(new KryptoniteFound);
return view('welcome');
});
Now, we can start our Laravel server by running the following artisan command.
- php artisan serve
We can trigger an email by visiting http://localhost:8000.
To queue emails, instead of calling send on the Mail::to
, just call the queue
method and pass it the mailable.
Mail::to('batman@example.com')->queue(new KryptoniteFound);
To delay a queued email, we use the later
method. It takes two parameters: the first is a DateTime
object indicating when the message should be sent, and the second is the mailable.
$when = Carbon\Carbon::now()->addMinutes(10);
Mail::to('batman@example.com')->later($when, new KryptoniteFound);
Laravel 5.3 offers a lot of promising features like notifications, OAuth, search, etc. Watch out for them.
Every developer knows the saying: “It is hard to build software without using a build tool.” We use build tools to get rid of repetitive tasks. If you think Gulp has killed Grunt, you may want to think about yet another tool, because npm has surpassed both.
Now Node provides a great way to implement a build process with only npm.
If using build tools has ever made you miserable, I know the feeling; that was how I felt with Grunt and Gulp. Since I started using npm as a build tool, I have been much more comfortable. Here I will show you how to do the same and get comfortable with your build tool.
At the end of this article, we will be making our own boilerplate.
When using Grunt or Gulp, the packages specific to that build tool are usually just wrappers on the main package. For instance, gulp-sass
is really a Gulp-specific wrapper to node-sass
. We can go straight to the source and just use node-sass
with npm!
There are drawbacks to using Grunt/Gulp-specific packages; plain npm scripts, on the other hand, have clear advantages:
- You can use && to combine multiple tasks.
- There is no separate config file (such as Gruntfile.js) for tasks; the package.json file alone is enough.
Let's start our build commands!
Create an empty directory and initialize it as an npm project using npm init. It will ask you a series of questions to construct your package.json file. If, like me, you feel too lazy to hit enter many times, use the shorthand npm init --yes.
Now check your directory, a package.json
file gets created like this:
{
"name": "your_directory_name",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
By default, a test script is created inside the scripts object. Inside the scripts
object, we are going to configure our tasks.
Run the default task using npm test, shorthand for npm run test:
It complains that node_modules
is missing; we have to add our dependencies. Let's install the dev dependencies first:
- npm i -D jshint lite-server mocha concurrently node-sass uglify-js
"scripts": {
...
"dev": "lite-server"
}
npm run dev
- I use it as the development server. lite-server
provides browser-sync
for live reloading, so we don't need to configure a watch property for all of our files (HTML, CSS, JS).
To learn more about lite-server
, refer to the docs.
"scripts": {
...
"db": "json-server --watch db.json --port 3005"
}
npm run db
- If you want to know more about JSON Server, refer to my article.
"scripts": {
...
"start": "concurrently -k \"npm run dev\" \"npm run db\""
}
npm start
, shorthand for npm run start
- Using concurrently, we can perform two tasks simultaneously. You can also combine the tasks using the &&
operator. To learn more about it, refer to the docs.
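The difference is easy to see with plain shell commands: && is sequential and short-circuits on failure, while concurrently runs tasks in parallel.

```shell
# '&&' runs its right-hand command only if the left one succeeds;
# 'concurrently' runs both commands at the same time instead.
first=$(true && echo "ran")
second=$(false && echo "ran" || true)   # left fails, so the echo is skipped
echo "first=$first"     # first=ran
echo "second=$second"   # second is empty
```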
"scripts": {
...
"uglify": "mkdir -p dist/js && uglifyjs src/js/*.js -m -o dist/js/app.js"
}
npm run uglify
- It minifies your JavaScript files and moves them into your desired directory, creating the target folder only if it does not already exist (the -p flag).
"scripts": {
...
"lint": "jshint src/**.js"
}
npm run lint
- It looks for JavaScript files inside the source folder and helps detect errors and potential problems in your code.
"scripts": {
...
"sass": "node-sass --include-path scss scss/main.scss assets/main.css"
}
npm run sass
- It compiles your .scss files to CSS automatically and at good speed.
"scripts": {
...
"test": "mocha test"
}
npm test
, shorthand for npm run test
- Mocha is a JavaScript test framework that helps you write test cases.
"scripts": {
...
"bash": "Location of the Bash/Shell script file"
}
npm run bash
- If you find yourself putting a lot of commands inside the scripts
object, you can move them into a Bash/Shell script and reference it in your package.json
file as above.
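For example, a hypothetical build.sh (the file name and its contents are assumptions) that such a "bash" entry could point to:

```shell
#!/bin/sh
# Hypothetical build.sh that a "bash" script entry in package.json could reference.
# It chains a few of the steps above and stops at the first failure.
set -e
mkdir -p dist/js   # same target directory the uglify task uses
echo "dist prepared"
# in a real project: npm run lint && npm run build-js
```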
So far we have seen the basic npm build commands and explanations for them. Now let's prepare our own boilerplate. Using this boilerplate will save you time setting up the build tool, allowing you to invest more time in building your app.
"scripts": {
"start": "concurrently -k \"npm run dev\" \"npm run watch-css\"",
"dev": "lite-server",
"db": "json-server --watch db.json --port 3005",
"build-js": "mkdir -p dist/js && uglifyjs src/js/*.js -m -o dist/js/app.js",
"lint": "jshint src/**/**.js",
"build-css": "node-sass --include-path scss scss/main.scss assets/main.css",
"watch-css": "nodemon -e scss -x \"npm run build-css\"",
"test": "mocha test",
"pretest": "npm run lint",
"posttest": "echo the test has been run!",
"bash": "Location of the bash/shell script file"
}
This boilerplate will take care of all the necessary things which we need during the development phase like:
npm run dev
- Bootstraps our app, opens it in the browser, reloads the browser whenever we make changes in the source.
build-js
- Minifies all our JavaScript files, which will be needed during production.
watch-css
- Nodemon is a utility that will monitor for any changes in your source and automatically restart your server. Here I have used it to monitor for any changes in the .scss
file, if there are changes, it will restart the server and build our CSS.
"scripts": {
"test": "echo I am test",
"pretest": "echo I run before test",
"posttest": "echo I run after test"
}
npm test
- It wraps the three commands above (pretest, test, posttest) and executes them in the order listed. When you run npm test
, npm first looks for a pretest
command; if it exists, it runs first, followed by test
and then posttest
. If there is no pretest command, npm executes the test command directly.
The remaining commands were explained in the previous section. You can also customize this boilerplate based on your needs.
I hope this article saves you time when preparing a build tool. We have now prepared our own boilerplate for npm as a build tool, and I hope you will agree that npm has killed both Grunt and Gulp. Feel free to use my boilerplate; contributions are welcome. For further reading, refer to the official npm scripts documentation.
If you have any queries, please let me know in the comments.
I remember the old days when people had to register for an account separately on each website.
It was a boring and tedious process to repetitively enter the same information over and over again on each website’s registration page.
Times have changed and so has the way people use their preferred websites and services.
After the advent of the OAuth2 specification, it has become quite a trivial task to allow your users to sign in to your application using a third-party service.
Logging in through third-party services has become such an important option that if your application does not have it, it seems a bit outdated.
So, in this tutorial, we are going to learn how to allow your users to log in using their social media accounts.
During the course of this tutorial, you will learn.
This tutorial assumes you have configured Devise without third-party authentication and users are able to use your on-site Devise features. It is beyond the scope of this tutorial to demonstrate how to fully customize Devise and set up its on-site features. The repository for this tutorial includes the code you need to fully set up and customize Devise along with the code discussed as part of this tutorial.
Though it is a bit out of scope, to round things up nicely, let us have a look at how to create an application through each of the respective third-party websites.
Before we begin creating applications, there is a small bit regarding callback URL that we need to talk about as we will need it when registering an application.
Some of the third-party OAuth providers require that you specify a callback URL when you create an application.
The callback URL is used to redirect the user back to your application after they have granted permissions to your application and added it to their account.
Devise works by providing a callback URL for each provider that you use.
The callback URL follows the convention <your-application-base-url>/<devise-entity>/auth/<provider>/callback
, where the provider is the name of the gem that provides the login strategy for that specific third party.
For example, if my application is hosted at http://www.example.com
and I have created Devise for the users
entity whom I wish to allow to log in using their Twitter account, the callback URL, considering the gem name that provides the strategy is twitter
, would be http://www.example.com/users/auth/twitter/callback.
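As a quick sanity check, the convention can be written out with the example values from the text (plain Ruby string interpolation, illustration only):

```ruby
# <base-url>/<devise-entity>/auth/<provider>/callback, built from the
# example values above (base URL, Devise entity, and strategy/gem name).
base_url = "http://www.example.com"
entity   = "users"     # the Devise entity
provider = "twitter"   # the gem/strategy name
callback = "#{base_url}/#{entity}/auth/#{provider}/callback"
puts callback  # http://www.example.com/users/auth/twitter/callback
```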
We are going to confirm the callback routes later in this tutorial once we are done setting up the different providers.
Log in to your Facebook account and browse to the URL https://developers.facebook.com.
I am assuming you have not registered for a Facebook developer account and have never created a Facebook application before.
Click the Register button at the top-right of the page.
Accept the Facebook developer agreement (in the modal dialog) by turning the switch to YES and clicking the Register button.
Click the Create App ID button that shows up in the same modal dialog.
Fill in the Display Name, and Contact Email fields and click the Create App ID button.
Once your application is created, you will be taken to the application settings page.
Choose Settings > Basic from the left menu.
Enter localhost
in the App Domains field.
Click the Add Platform button at the bottom of the page.
Choose Website as the platform.
Enter http://localhost:3000
in the Site URL field.
Click the Save Changes button at the bottom of the page.
Choose Dashboard from the left menu.
Note down the App ID, and App Secret shown on the page as they will be needed later.
Log in to your GitHub account.
Once you have logged in, click your account avatar at the top-right and choose Settings from the drop-down menu.
On the Settings page, choose Developer settings > OAuth applications from the left menu.
Click the Register a new application button.
Fill in the Application name, Homepage URL, and Application description fields.
Enter http://localhost:3000/users/auth/github/callback
in the Authorization callback URL field.
Click the Register application button.
Once your application is created, you will be taken to the application page.
Note down the Client ID, and Client Secret shown on the page as they will be needed later.
Log in to your Google account and browse to the URL https://console.developers.google.com/apis/library.
On the Google developer console, choose Credentials from the left menu.
Click the Create credentials button and choose OAuth client ID from the menu that pops up.
For your Application type, choose Web application.
Fill in the Name field.
Under the Restrictions section, enter http://localhost:3000
in the Authorized JavaScript origins field.
Enter http://localhost:3000/users/auth/google_oauth2/callback
in the Authorized redirect URIs field and click the Create button.
Once your application is created, you will be shown the client ID, and client secret in a modal dialog.
Note down the client ID, and client secret shown in the modal dialog as they will be needed later.
Log in to your Twitter account and browse to the URL https://apps.twitter.com.
On the Twitter apps page, click the Create New App button.
Fill in the Name, Description, and Website fields.
Enter http://localhost:3000/users/auth/twitter/callback
in the Callback URL field.
Accept the Developer Agreement and click the Create your Twitter application button.
On the application page, that is shown next, click the Settings tab.
Enter a mock URL in the Privacy Policy URL, and Terms of Service URL field and click the Update Settings button.
Click the Permissions tab and change the Access type to Read only.
Check the Request email addresses from users field under the Additional Permissions section and click the Update Settings button.
Click the Keys and Access Tokens tab.
Note down the Consumer Key (API Key), and Consumer Secret (API Secret) shown on the page as they will be needed later.
We are going to need a number of gems to make authentication through third-party providers work.
Apart from that, we are also going to add two additional gems.
The first one will help us store user sessions in the database while the second one will only be used in the development
environment to set environment variables.
The reason we will allow our application to save user sessions in the database is that there is a limit to how much data you can store in a session: four kilobytes. Using the database as the session store overcomes this limitation.
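As a rough illustration of that limit (plain Ruby with hypothetical session data, not Rails API): a cookie-stored session must fit in about four kilobytes once serialized.

```ruby
require 'json'

# Hypothetical session payload: a user id plus a 50-item cart.
session = { user_id: 42, cart: Array.new(50) { |i| "item-#{i}" } }
payload = JSON.generate(session)

# Small sessions fit comfortably; larger ones would overflow a 4 KB cookie.
puts "session size: #{payload.bytesize} bytes (cookie limit ~4096)"
```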
As for using a gem to set environment variables in the development
environment, it is because we will be using a lot of third-party application information that needs to be kept secret.
Therefore, it is recommended to expose this information to our application as environment variables instead of adding it directly to a configuration file.
Open the file Gemfile
and add the following gems.
# Use Devise for authentication
gem 'devise', '~> 4.2'
# Use Omniauth Facebook plugin
gem 'omniauth-facebook', '~> 4.0'
# Use Omniauth GitHub plugin
gem 'omniauth-github', '~> 1.1', '>= 1.1.2'
# Use Omniauth Google plugin
gem 'omniauth-google-oauth2', '~> 0.4.1'
# Use Omniauth Twitter plugin
gem 'omniauth-twitter', '~> 1.2', '>= 1.2.1'
# Use ActiveRecord Sessions
gem 'activerecord-session_store', '~> 1.0'
We have started off by adding the Devise gem.
Devise gem supports integration with Omniauth which is a gem that standardizes third-party authentication for Rails applications.
Therefore, following the Devise gem, we have simply added the Omniauth strategies we need, namely, facebook
, github
, google-oauth2
, and twitter.
Database sessions are facilitated by the activerecord-session_store
gem which has been added towards the bottom.
The last gem we need to add is the dotenv
gem.
However, since this gem will only be used in the development
environment, we need to add it to the development
group in the Gemfile.
Open the Gemfile
, locate the group :development do
declaration, and append the following gem.
group :development do
.
.
.
# Use Dotenv for environment variables
gem 'dotenv', '~> 2.2.1'
end
All our gems have been added.
Execute the following command at the root of your project to install the added gems.
- bundle install --with development
We are done as far as the gems for our project are concerned.
The dotenv
gem we added earlier allows us to create a .env
file at the root of our project and set environment variables easily.
However, if you are using source control like Git, make sure the .env
file is ignored and not committed to your repository as it will contain confidential information.
You can, however, add a .env.example
file with placeholder data for the environment variables and commit it to the repository to show other developers on the project how information needs to be added to the .env
file.
Also recall, when creating the third-party applications, I instructed you to note down the respective client id and secret which we will be using here.
Create a .env
file at the root of your project and add the following code.
FACEBOOK_APP_ID=<facebook-app-id>
FACEBOOK_APP_SECRET=<facebook-app-secret>
GITHUB_APP_ID=<github-app-id>
GITHUB_APP_SECRET=<github-app-secret>
GOOGLE_APP_ID=<google-app-id>
GOOGLE_APP_SECRET=<google-app-secret>
TWITTER_APP_ID=<twitter-app-id>
TWITTER_APP_SECRET=<twitter-app-secret>
The <facebook-app-id>
, and <facebook-app-secret>
need to be replaced with your application id and secret.
Similarly, replace the remaining placeholders with the information provided to you by the respective third parties.
For our application configuration, we only need to touch a couple of areas, Devise and the session configuration.
Once we have added our provider application information as environment variables, we need to configure Devise to use it as part of the corresponding provider strategy.
Open the file config/initializers/devise.rb
and add the following code.
# ==> OmniAuth
# Add a new OmniAuth provider. Check the wiki for more information on setting
# up on your models and hooks.
config.omniauth :facebook, ENV['FACEBOOK_APP_ID'], ENV['FACEBOOK_APP_SECRET'], scope: 'public_profile,email'
config.omniauth :github, ENV['GITHUB_APP_ID'], ENV['GITHUB_APP_SECRET'], scope: 'user,public_repo'
config.omniauth :google_oauth2, ENV['GOOGLE_APP_ID'], ENV['GOOGLE_APP_SECRET'], scope: 'userinfo.email,userinfo.profile'
config.omniauth :twitter, ENV['TWITTER_APP_ID'], ENV['TWITTER_APP_SECRET']
You can use the comments in the above code snippet to locate the section of the configuration file where you need to add the Omniauth strategy settings.
The config.omniauth
method lets you add and configure an Omniauth strategy.
In our case, we have simply passed the name of the strategy, and the application id and secret using environment variables.
There is also an additional scope
parameter that has been added to some of the providers. It helps us specify the amount of control we wish to have over the authenticated user’s data.
The reason the scope
parameter is optional is some of the providers allow you to specify the scope when you create an application so there is no need to be explicit in such a case.
Also notice, the strategy names (facebook
, github
, google_oauth2
, and twitter
) are the same as the gem name for the respective strategy.
Open the file config/initializers/session_store.rb
and replace the Rails.application.config.session_store
directive with the following code, completely replacing the single line of code contained in the file.
Rails.application.config.session_store :active_record_store, key: '_devise-omniauth_session'
And we are done!
In order to allow our users to log in using third-party providers, we need to update the users
table, more generally, the entity table you have generated that Devise uses to authenticate users.
I am going to assume the Devise entity is a user but you can very well replace this entity name for your case.
We are also going to create a table to store user sessions.
Execute the following command at the root of your project to generate an updated users table migration.
- rails generate migration update_users
Open the file db/migrate/update_users.rb
and add the following code.
class UpdateUsers < ActiveRecord::Migration[5.0]
def change
add_column(:users, :provider, :string, limit: 50, null: false, default: '')
add_column(:users, :uid, :string, limit: 500, null: false, default: '')
end
end
The provider
and uid
fields help to identify a user uniquely as this pair will always have unique values.
For our case, the provider can be Facebook
, GitHub
, Google
, or Twitter
and the uid
will be the user id assigned to a user by any of these third parties.
Execute the following command at the root of your project to generate a create sessions table migration.
- rails generate migration create_sessions
Open the file db/migrate/create_sessions.rb
and add the following code.
class CreateSessions < ActiveRecord::Migration[5.0]
def change
create_table :sessions do |t|
t.string :session_id, null: false
t.text :data
t.timestamps
end
add_index :sessions, :session_id, unique: true
add_index :sessions, :updated_at
end
end
Our sessions table stores the session id and data with timestamps.
We have also added an index to the session_id
and updated_at
fields respectively, as this will help with looking up user sessions when users return to our application.
Execute the following command at the root of your project to migrate the database.
- rails db:migrate
You may go ahead and browse the database to make sure the respective tables were created and updated.
We are going to add a method to our user
model that will create the user record in the database using the data provided by the third-party provider.
We also need to register the Omniauth strategies in our user
model so that they are picked up by Devise.
Again, your Devise entity may be different and so will be the model’s file name.
Open the file app/models/user.rb
and add the following code.
class User < ApplicationRecord
# Include default devise modules. Others available are:
# :confirmable, :lockable, :timeoutable and :omniauthable
devise :database_authenticatable, :registerable,
:recoverable, :rememberable, :trackable, :validatable,
:confirmable, :lockable, :timeoutable,
:omniauthable, omniauth_providers: [:facebook, :github, :google_oauth2, :twitter]
def self.create_from_provider_data(provider_data)
where(provider: provider_data.provider, uid: provider_data.uid).first_or_create do | user |
user.email = provider_data.info.email
user.password = Devise.friendly_token[0, 20]
user.skip_confirmation!
end
end
end
The omniauth_providers
array passed to the devise
method helps us register the Omniauth strategies.
The array contains symbolized names of the strategies; these names must match the gem name for the respective Omniauth strategy.
The create_from_provider_data
method is passed the data provided by the third party and is used to create the user in the database.
The user is first searched using the provider
string and user id(uid
) by the first_or_create
method.
The first_or_create
method would either fetch the user if it is found in the database or create it if it is not present.
Inside the first_or_create
block, we have simply set the user attributes from the provider data, which for our case is only the user’s email.
There are two parts worth mentioning inside the block.
The first one is the user.password = Devise.friendly_token[0, 20]
which sets an arbitrary password for the user since it is not exposed by the provider and is required to create a user.
The second one is the user.skip_confirmation!
declaration which skips the user email verification process since it has already been verified by the respective provider.
If you have added other fields to your Devise entity table such as
first_name
,last_name
, anddate of birth
, you can set these fields to the corresponding field values in the third party provider data.
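The first_or_create lookup used above can be sketched in plain Ruby (an array stands in for the database; this illustrates the semantics, not the ActiveRecord implementation):

```ruby
# Stand-in "table" of user records.
USERS = []

# Mimics first_or_create: return the matching record if one exists,
# otherwise build it, run the setup block once, and store it.
def first_or_create(provider, uid)
  USERS.find { |u| u[:provider] == provider && u[:uid] == uid } ||
    { provider: provider, uid: uid }.tap { |u| yield(u) if block_given?; USERS << u }
end

a = first_or_create("twitter", "42") { |u| u[:email] = "batman@batcave.io" }
b = first_or_create("twitter", "42") { |u| u[:email] = "ignored-on-second-call" }
puts USERS.length   # 1 -- the second call fetched the existing record
puts a.equal?(b)    # true -- same record, block not run again
```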
What we need to work on next is to add the controller that will be handling the third-party redirects back to our application.
Execute the following command to generate an Omniauth callbacks controller.
- rails generate controller users/omniauth
I have appended users/
before the controller name to generate it under a directory named after the Devise entity.
You can change it based on your Devise entity or if you are using multiple Devise entities, you can altogether skip adding the controller under a separate directory by simply executing rails generate controller omniauth
.
It is a Devise convention to name each controller method after the strategy whose callback it handles, so we will need to add four methods named facebook
, github
, google_oauth2
, and twitter
respectively to our Omniauth controller.
The controller actions that follow should be added to the app/controllers/users/omniauth_controller.rb
file that we have just created.
# facebook callback
def facebook
@user = User.create_from_provider_data(request.env['omniauth.auth'])
if @user.persisted?
sign_in_and_redirect @user
set_flash_message(:notice, :success, kind: 'Facebook') if is_navigational_format?
else
flash[:error] = 'There was a problem signing you in through Facebook. Please register or try signing in later.'
redirect_to new_user_registration_url
end
end
The user data provided by the third party is available to our application in the request environment variable request.env['omniauth.auth']
so we have passed it to the create_from_provider_data
method we created earlier.
If the user is saved to the database, we set a flash message using the set_flash_message
helper method provided by Devise, sign the user in and redirect them to their homepage.
In case the user is not saved to the database, a flash error message is set and the user is redirected to the registration page.
The code for the remaining provider callbacks is very similar, other than the flash message text.
# github callback
def github
@user = User.create_from_provider_data(request.env['omniauth.auth'])
if @user.persisted?
sign_in_and_redirect @user
set_flash_message(:notice, :success, kind: 'GitHub') if is_navigational_format?
else
flash[:error] = 'There was a problem signing you in through GitHub. Please register or try signing in later.'
redirect_to new_user_registration_url
end
end
# google callback
def google_oauth2
@user = User.create_from_provider_data(request.env['omniauth.auth'])
if @user.persisted?
sign_in_and_redirect @user
set_flash_message(:notice, :success, kind: 'Google') if is_navigational_format?
else
flash[:error] = 'There was a problem signing you in through Google. Please register or try signing in later.'
redirect_to new_user_registration_url
end
end
# twitter callback
def twitter
@user = User.create_from_provider_data(request.env['omniauth.auth'])
if @user.persisted?
sign_in_and_redirect @user
set_flash_message(:notice, :success, kind: 'Twitter') if is_navigational_format?
else
flash[:error] = 'There was a problem signing you in through Twitter. Please register or try signing in later.'
redirect_to new_user_registration_url
end
end
Apart from the respective provider callbacks, we also need to add a failure callback which Devise will execute for all cases where authentication fails for some reason.
It could be that the redirection failed or the user did not grant permissions to your application.
Add the following failure callback, below the provider callbacks we added earlier.
def failure
flash[:error] = 'There was a problem signing you in. Please register or try signing in later.'
redirect_to new_user_registration_url
end
You might think that we need to add the appropriate links to redirect users to third-party applications to allow them to sign in to our application but this is taken care of by Devise.
Open the file app/views/devise/shared/_links.html.erb
and locate the following code snippet.
<%- if devise_mapping.omniauthable? %>
<%- resource_class.omniauth_providers.each do |provider| %>
<%= link_to "Sign in with #{OmniAuth::Utils.camelize(provider)}", omniauth_authorize_path(resource_name, provider) %><br />
<% end -%>
<% end -%>
The above code snippet checks your Omniauth setup and auto-generates the required links.
Since this shared view is rendered on the sessions/new
view, your users have the option to sign in using your configured providers.
Isn’t Devise a thing of beauty?
The last piece of the puzzle is to set up the application routes.
Throughout this post, I have assumed that you have an on-site Devise implementation configured and fully functional.
So, there is a possibility you may already have the following route added to your routes file.
However, what you need to focus on is the additional controllers
parameter which is used to specify the callbacks controller and will not be present in the route declaration that you have already added.
Rails.application.routes.draw do
.
.
.
.
devise_for :users, controllers: { omniauth_callbacks: 'users/omniauth' }
end
Once you have configured the routes, you can execute the following command to make sure the callback URLs were set up correctly.
- rails routes
Voilà! We are all set up to test our application.
We have successfully added third-party login through Facebook, GitHub, Google, and Twitter to our application.
It is time to take it out for a test drive.
Recall that we are using the dotenv
gem in our development
environment, so the command to run our Rails application changes slightly: we also need the environment variables to be set and available to our application.
Execute the following command to start your rails application.
- dotenv rails server
Browse to Devise’s user login page and you should see the text “Sign in with…” for each of the providers we set up.
Here is a screenshot of how it looks with Devise’s primitive setup.
Go ahead and try signing in.
You will be taken to the third-party provider’s webpage where you will be prompted to grant your application access to the user’s data.
Once you have done that, you will be taken back to your application, to the user’s homepage, with a flash message notifying you of successful sign-in.
Here is a screenshot of when the Sign in with Facebook link is clicked through.
Adding a third-party login option to your application is a nice touch and further enhances your application.
Though we have targeted four of the most famous of the lot, you are free to get your hands dirty and try the others available.
The Omniauth gem’s wiki has a comprehensive list of the strategies available and you should probably get to playing around with them.
I hope you found this tutorial interesting and informative. Until my next piece, happy coding!
By default, AngularJS will route URLs with a hashtag.
For example:
http://example.com/
http://example.com/#/about
http://example.com/#/contact
It is very easy to get clean URLs and remove the hashtag from the URL.
There are 2 things that need to be done.
$locationProvider
In Angular, the $location
service parses the URL in the address bar and makes changes to your application and vice versa.
I would highly recommend reading through the official Angular $location
docs to get a feel for the $location
service and what it provides.
We will use the $locationProvider
module and set html5Mode
to true
.
We will do this when defining your Angular application and configuring your routes.
angular.module('scotchy', [])
.config(function($routeProvider, $locationProvider) {
$routeProvider
.when('/', {
templateUrl : 'partials/home.html',
controller : mainController
})
.when('/about', {
templateUrl : 'partials/about.html',
controller : mainController
})
.when('/contact', {
templateUrl : 'partials/contact.html',
controller : mainController
});
// use the HTML5 History API
$locationProvider.html5Mode(true);
});
What is the HTML5 History API? It is a standardized way to manipulate the browser history using a script. This lets Angular change the routing and URLs of our pages without refreshing the page. For more information on this, here is a good HTML5 History API Article.
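A conceptual sketch of what the History API enables (plain JavaScript with a stand-in stack, not the real browser history object): the URL changes by pushing entries, with no page reload.

```javascript
// Stand-in history stack; in the browser, history.pushState maintains this.
const stack = [];

// Mimics the shape of history.pushState(state, title, url):
// record a new entry and make its URL the "current" one.
function pushState(state, title, url) {
  stack.push({ state, url });
}

pushState({ page: 'home' }, '', '/');
pushState({ page: 'about' }, '', '/about');
console.log(stack[stack.length - 1].url); // '/about' -- no page reload happened
```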
To link around your application using relative links, you will need to set a <base>
in the <head>
of your document.
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<base href="/">
</head>
There are plenty of other ways to configure this and the HTML5 mode set to true should automatically resolve relative links. This has just always worked for me. If the root of your application is different than the URL (for instance /my-base
), then use that as your base.
The $location
service will automatically fallback to the hashbang method for browsers that do not support the HTML5 History API.
This happens transparently to you and you won’t have to configure anything for it to work. From the Angular $location
docs, you can see the fallback method and how it works.
This is a simple way to get pretty URLs and remove the hashtag in your Angular application. Have fun making those super clean and super fast Angular apps!
Not too long ago we took a look at some of the built-in Angular filters. The built-in filters cover many common use cases including formatting dates, currencies, limiting the number of items displayed, and more. These filters are both useful and give insights into how we may improve our workflow when building Angular apps.
Today, we will build our own custom AngularJS filters. We’ll start simple and build a couple of filters that manipulate numbers and strings, then we’ll build a filter that manipulates an entire data set. Finally, in our previous article where we discussed the built-in AngularJS filters, Pierre-Adrien asked how we could use the built-in Angular currency filter to display the currency denomination after the amount (i.e., “9.99$” instead of “$9.99”) as is common in some places of the world. Unfortunately, the built-in currency filter does not support this functionality, so we’ll build our own that does!
Angular exposes a simple API for creating a filter. Just as you would declare a controller with app.controller('myCtrl', function(){});
, you can create a new filter by appending .filter('filterName', function(){})
to your Angular app.
A filter is very similar to a factory or service in many regards but has the added advantage of behaving on a global scope once created. As we have previously seen, you can invoke a filter on both the data binding in your HTML or directly inside of your controller or directive by using the $filter
service. Let’s break down the structure of a filter.
// To declare a filter we pass in two parameters to app.filter
// The first parameter is the name of the filter
// second is a function that will return another function that does the actual work of the filter
app.filter('myFilter', function() {
// In the return function, we must pass in a single parameter which will be the data we will work on.
// We have the ability to support multiple other parameters that can be passed into the filter optionally
return function(input, optional1, optional2) {
var output;
// Do filter work here
return output;
}
});
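Stripped of the Angular plumbing, a filter is just a factory function that returns the worker function. Here is a framework-free sketch of that shape; the filter behavior itself (appending punctuation) is hypothetical, purely to illustrate the pattern:

```javascript
// A filter "factory": Angular calls this once and caches the returned function.
function exclaimFilterFactory() {
  // The returned function receives the bound value plus any optional arguments.
  return function (input, punctuation) {
    return String(input) + (punctuation || '!');
  };
}

// Angular would invoke the inner function for bindings like {{ value | exclaim:'?' }}
var exclaim = exclaimFilterFactory();
exclaim('total');      // "total!"
exclaim('total', '?'); // "total?"
```

The two-level shape is the important part: the outer function is where you would inject dependencies, while the inner function does the per-value work.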
This may seem confusing from the get-go, so let’s jump to some examples that will demystify writing custom filters.
Let’s start off slow and simple. The first custom filter we’ll write will convert numbers to their ordinal values, meaning that if we apply our ordinal filter to, say, the number 43, what will be displayed is “43rd”. Let’s look at the code for our ordinal filter.
// Setup the filter
app.filter('ordinal', function() {
// Create the return function
// set the required parameter name to **number**
return function(number) {
// Ensure that the passed-in data is a number
if (isNaN(number) || number < 1) {
// If the data is not a number, or is less than one (thus not having an ordinal value), return it unmodified
return number;
}
// The teens are special cases: 11th, 12th, and 13th always take 'th'
var lastTwoDigits = number % 100;
if (lastTwoDigits >= 11 && lastTwoDigits <= 13) {
return number + 'th';
}
// Otherwise the suffix depends on the last digit alone
var lastDigit = number % 10;
if (lastDigit === 1) {
return number + 'st';
} else if (lastDigit === 2) {
return number + 'nd';
} else if (lastDigit === 3) {
return number + 'rd';
} else {
// 0 and 4-9 all take 'th'
return number + 'th';
}
}
});
Applying this filter to our views is straightforward:
{{ 25 | ordinal }}
will yield 25th. If we were to apply the ordinal filter to a string, such as
{{ 'not a number' | ordinal }}
we would simply get the string not a number back.
It is a good practice to ensure you have appropriate data to filter and, if you do not, to simply return the data unmodified. Take a look at the CodePen below for some additional examples.
See the Pen AngularJS Custom Filter - Ordinal Numbers.
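The suffix logic itself is plain JavaScript, so it can be unit-tested outside of Angular. A minimal standalone sketch (note that the teens, 11 through 13, always take “th”):

```javascript
// Return a number with its ordinal suffix, e.g. 43 -> "43rd".
function ordinal(number) {
  if (isNaN(number) || number < 1) {
    return number; // non-numbers and values below one pass through unmodified
  }
  var lastTwo = number % 100;
  if (lastTwo >= 11 && lastTwo <= 13) {
    return number + 'th'; // 11th, 12th, and 13th are special cases
  }
  switch (number % 10) {
    case 1: return number + 'st';
    case 2: return number + 'nd';
    case 3: return number + 'rd';
    default: return number + 'th';
  }
}

ordinal(43); // "43rd"
ordinal(11); // "11th"
```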
Sorry for the bad joke. The next custom filter we build will capitalize either the first letter or a letter we specify. The additional parameter will specify which letter to capitalize, if no additional parameter is passed then the first letter will be capitalized.
This is a bit of a contrived example and has no real practical uses but we’ll use it to show off how you could extend your filters.
// Setup the filter
app.filter('capitalize', function() {
// Create the return function and set the required parameter as well as an optional parameter
return function(input, char) {
if (isNaN(input)) {
// If the input data is not a number, perform the operations to capitalize the correct letter.
char = char - 1 || 0; // default to the first letter
var letter = input.charAt(char).toUpperCase();
var out = [];
for (var i = 0; i < input.length; i++) {
if (i == char) {
out.push(letter);
} else {
out.push(input[i]);
}
}
return out.join('');
} else {
return input;
}
}
});
Again, applying this filter is very simple. If we wanted to capitalize a specific letter, we could pass the optional parameter such as:
{{ 'onomatopoeia' | capitalize:3 }}
and this would return the result of
onOmatopoeia
See the Pen AngularJS Custom Filter - Capitalize.
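The same capitalization logic can be expressed as a plain function, which makes the edge cases easy to test in isolation. A sketch of the worker function only, not the filter registration itself:

```javascript
// Capitalize the letter at position `pos` (1-based); defaults to the first letter.
function capitalizeAt(input, pos) {
  if (typeof input !== 'string' || input.length === 0) {
    return input; // only non-empty strings are modified
  }
  var i = (pos || 1) - 1;
  if (i < 0 || i >= input.length) {
    return input; // position out of range: return unmodified
  }
  return input.slice(0, i) + input.charAt(i).toUpperCase() + input.slice(i + 1);
}

capitalizeAt('onomatopoeia', 3); // "onOmatopoeia"
capitalizeAt('scotch');          // "Scotch"
```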
In the previous examples, we applied filters to single items, now let’s apply a filter to a collection. In this example, we will actually filter a data set.
In programming, there are hundreds of ways to reach the same goal, and in this example we’ll filter a list and return only the items that match certain criteria.
We will go through a list of programming languages and display only the statically typed ones. Easy enough right?
// Setup the filter
app.filter('staticLanguage', function() {
// Create the return function and set the required parameter name to **input**
return function(input) {
var out = [];
// Using the angular.forEach method, go through the array of data and perform the operation of figuring out if the language is statically or dynamically typed.
angular.forEach(input, function(language) {
if (language.type === 'static') {
out.push(language)
}
})
return out;
}
});
See the Pen AngularJS Custom Filter - Static Language.
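The same result can be had with Array.prototype.filter instead of angular.forEach. A framework-free sketch with a small sample data set (the language list here is illustrative):

```javascript
// Keep only the statically typed languages from a list.
function staticLanguages(languages) {
  return (languages || []).filter(function (language) {
    return language.type === 'static';
  });
}

var languages = [
  { name: 'Java',   type: 'static'  },
  { name: 'Python', type: 'dynamic' },
  { name: 'Go',     type: 'static'  }
];

staticLanguages(languages).map(function (l) { return l.name; }); // ["Java", "Go"]
```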
In the first example, we looked at creating a simple custom filter that only did one thing (and hopefully did that one thing well). Next, let’s take a look at how we can create a filter that accepts additional parameters.
The idea for this filter comes from Pierre-Adrien, who was wondering whether the built-in currency filter supports the ability to choose what side the currency symbol goes on. Unfortunately, it does not, so we’ll build our own custom currency filter that does!
In the US it is standard practice to place the $
symbol before the amount (i.e., $9.99), but in certain countries, it is customary to place the symbol after the amount (i.e., 9.99$).
For our custom filter, we will allow the user to pass two parameters. The first will be the symbol or string they want to use to denote the currency, and the second a true
or false
boolean value that will determine whether the symbol is added before or after the amount.
We will default the symbol to the dollar sign ($
) and the position to before the amount, so the filter still works if neither parameter is passed.
// Setup the filter
app.filter('customCurrency', function() {
// Create the return function and set the required parameter name to **input**
// setup optional parameters for the currency symbol and location (left or right of the amount)
return function(input, symbol, place) {
// Ensure that we are working with a number
if(isNaN(input)) {
return input;
} else {
// Check if optional parameters are passed, if not, use the defaults
symbol = symbol || '$';
place = place === undefined ? true : place;
// Place the symbol on the correct side of the amount
if (place === true) {
return symbol + input;
} else {
return input + symbol;
}
}
}
});
One thing to note when dealing with filters that support multiple parameters: you must pass the parameters in the correct order! You do not have to pass all the parameters, so in our custom currency filter it is perfectly acceptable to pass only the symbol, but you cannot pass only the location of where you want the symbol to display.
If you wanted to change only the order, you would still need to pass in the symbol such as {{ 25 | customCurrency:'$':false }}
.
See the Pen AngularJS Custom Filter - Custom Currency.
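As with the earlier filters, the worker function is plain JavaScript and easy to verify on its own. A standalone sketch of the same logic:

```javascript
// Format an amount with a currency symbol before (the default) or after it.
function customCurrency(input, symbol, before) {
  if (isNaN(input)) {
    return input; // only numbers are formatted
  }
  symbol = symbol || '$';                        // default symbol
  before = before === undefined ? true : before; // default position: before the amount
  return before ? symbol + input : input + symbol;
}

customCurrency(25);             // "$25"
customCurrency(25, '€', false); // "25€"
```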
Today we built our own custom AngularJS filters. We learned how to create filters from scratch, built filters that did single tasks, and created filters that had extended functionality. Filters can be a powerful tool for extending the presentation of your applications. What are some custom filters you would like to see?
]]>The Visual Studio Code 1.32 February update is out now and with it comes some great new features for Vue users.
Let’s run through the list of features that are in the new 1.32 update:
- Configure the font-size, font-family, and line-height of the Debug Console.

Big list of updates as usual! This is why VS Code has consistently been used by so many developers.
Every month of VS Code updates brings in some useful features but there’s one this month that really stood out to me.
The ability to have VS Code (through the Vetur Vue Plugin) provide IntelliSense and autocomplete. Here’s the example that the VS Code update logs give us:
Let’s say you have some data()
in your Vue component. Let’s say a message
variable.
If you wanted to use that message
variable in your template, you can start typing and see VS Code help you out!
This will work for the following:
- data in the current component
- computed properties
- methods
- props for any child components: will show on v-bind

The other updates that came in are:

- <template> formatter

The full list of updates can be found on the Vetur changelog.
A big update for Vue developers that use VS Code and Vetur. Will definitely speed up my own Vue work. Great work to the VS Code team as always!
]]>These are the 3 tips I found pretty handy while working with TypeScript:
Though I discovered these while working with Angular applications, none of the tips are Angular-specific; they’re just TypeScript.
I like interfaces. However, I don’t like to import them every time. Although Visual Studio Code has an auto-import feature, I don’t like my source files being “polluted” by multiple lines of imports just for the sake of strong typing.
This is how we do it normally.
// api.model.ts
export interface Customer {
id: number;
name: string;
}
export interface User {
id: number;
isActive: boolean;
}
// using the interfaces
import { Customer, User } from './api.model'; // this line will grow longer if there's more interfaces used
export class MyComponent {
cust: Customer;
}
By using a namespace, we can eliminate the need to import the interface files.
// api.model.ts
namespace ApiModel {
export interface Customer {
id: number;
name: string;
}
export interface User {
id: number;
isActive: boolean;
}
}
// using the interfaces
export class MyComponent {
cust: ApiModel.Customer;
}
Nice, right? Using namespaces also helps you better organize and group the interfaces. Please note that you can split a namespace across many files.
Let’s say you have another file called api.v2.model.ts
. You add in new interfaces, but you want to use the same namespace.
// api.v2.model.ts
namespace ApiModel {
export interface Order {
id: number;
total: number;
}
}
You can definitely do so. To use the newly created interface, just use it as in the previous example.
// using the interfaces with same namespaces but different files
export class MyComponent {
cust: ApiModel.Customer;
order: ApiModel.Order;
}
Here is the detailed documentation on TypeScript namespaces.
The other way to eliminate imports is to create a TypeScript file ending with .d.ts
. The “d” stands for declaration file in TypeScript (more explanation here).
// api.model.d.ts
// you don't need to export the interface in d file
interface Customer {
id: number;
name: string;
}
Use it as normal without the need to import it.
// using the interfaces of d file
export class MyComponent {
cust: Customer;
}
I recommend solution 1 over solution 2 because:
It’s quite common to use the same interface for CRUD operations. Let’s say you have a customer interface: during creation, all fields are mandatory, but during an update, all fields are optional. Do you need to create two interfaces to handle this scenario?
Here is the interface
// api.model.ts
export interface Customer {
id: number;
name: string;
age: number;
}
Partial
is a built-in type that makes all properties of an object optional. Its declaration is included in the default declaration file lib.es5.d.ts
.
// lib.es5.d.ts
type Partial<T> = {
[P in keyof T]?: T[P];
};
How can we use that? Look at the code below:
// using the interface but make all fields optional
import { Customer } from './api.model';
export class MyComponent {
cust: Partial<Customer>;
ngOnInit() {
this.cust = { name: 'jane' }; // no error thrown because all fields are optional
}
}
If you don’t find Partial
declaration, you may create a d file yourself (e.g. util.d.ts) and copy the code above into it.
For more advanced type usage of TypeScript, you can read here.
As a JavaScript-turned-TypeScript developer, one might find TypeScript errors annoying at times. In some scenarios, you just want to tell TypeScript, “Hey, I know what I am doing, please leave me alone.”
@ts-ignore
commentFrom TypeScript version 2.6 onwards, you can do so by using comment @ts-ignore
to suppress errors.
For example, TypeScript will throw error “Unreachable code detected” in this following code:
if (false) {
console.log('x');
}
You can suppress that by using comment @ts-ignore
if (false) {
// @ts-ignore
console.log('x');
}
Find out more details here: TypeScript 2.6 release
Of course, I will suggest you always try to fix the error before ignoring it!
TypeScript is good for your (code) health. It has pretty decent documentation. I like the fact that they have comprehensive What's new
documentation for every release. It’s an open-source project on GitHub if you would like to contribute. The longer I work with TypeScript, the more I love and appreciate it.
That’s it, happy coding!
]]>A Web Developer’s need to learn never ends, so what better way to take in the latest technologies than listening to a podcast?! You can listen while you drive, clean the house, take a shower (no judgment…you do you!), or anything else. Here are the top 10 podcasts you should be listening to as a Web Developer!
Wes Bos and Scott Tolinkski are full-stack developers who create courses on Web Development. They decided to start Syntax which is by far my favorite dev podcast and releases two a week. They are incredibly entertaining, but also have an immense amount of knowledge to share around development, entrepreneurship, soft skills, and more!
https://blog.codepen.io/radio/
CodePen is used by Web Developers everywhere as a way to showcase and share demo code. As you might guess, this podcast is run by CodePen employees. They share stories, learnings, struggles, and successes that they’ve come across in growing their small company. This perspective is a little more unique than the other podcasts because you can follow the success of CodePen as a whole while learning more details along the way.
http://www.fullstackradio.com/
Full Stack Radio is run by Adam Wathan who has created incredible content on Vue, Laravel, and Web Design. In each episode, Adam brings on a guest to discuss a variety of different topics covering anything from design to testing.
Shop Talk is hosted by Dave Rupert and Chris Coyier where they focus on front-end web design and development. Dave is a developer at a web design shop in Austin Texas, while Chris is one of the co-founders of CodePen.
Fun fact… Dave just recently joined CodePen radio for an episode!
JavaScript Jabber releases weekly podcasts discussing front-end and back-end development. As you can expect, it’s everything JavaScript…best practices, tools, testing, deployment, etc. They typically have a panel of 3-4 people which makes this podcast pretty conversational!
https://reactpodcast.simplecast.fm/
React is arguably the hottest front-end framework around, so why not a podcast dedicated to React? They recently hosted Laurie Voss, one of the co-founders of NPM, so obviously they are capable of getting some great guests on the show. If you’re looking for content that’s a little more targeted at React, check it out!
https://www.codenewbie.org/podcast
If you’re a new developer and looking for discussion that’s a little bit more introductory, Code Newbie is for you. They focus on the coding journeys of their guests, which makes the episodes very relatable! Again, if you’re new to programming there’s a lot to take away from these personal stories!
https://frontendhappyhour.com/
Front End Happy Hour features a panel of engineers from Netflix, Evernote, and LinkedIn who focus on front-end development. Believe it or not, the hosts of Front End Happy Hour are actually talking and recording over drinks (hence the name). Thanks to this, you might get a bit more raw commentary with this one.
https://realtalkjavascript.simplecast.fm/
The hosts of Real Talk have a bit of a Microsoft background and have been significantly influential in the Angular community. That said, they do cover an array of topics related to JavaScript. This podcast is relatively new (started in 2018), so don’t let the smaller number of subscribers scare you away. They are growing fast!
https://itunes.apple.com/us/podcast/all-javascript-podcasts-by-devchat-tv/id496893300?mt=2
This podcast is a little bit of a cheat because it’s really just a collection of other podcasts, including JavaScript Jabber, mentioned above. However, one of the other podcasts included, My JavaScript Story, has some great content to share as well! It seems like the other two that are included haven’t released an episode recently. Regardless, if you’re going to follow JavaScript Jabber, you might as well follow this one for some extra content!
Podcasts have seen a huge rise in the past few years. Which do you listen to? We can add more to the list! Just let us know!
]]>You may not know this - but AngularJS comes with many handy filters built-in. I see programmers reinventing the wheel and reimplementing functionality that already exists all the time. Sometimes this happens because you need to address a specific use case but more often than not, it’s simply because the programmer wasn’t aware that the functionality was already there.
In this article, I will go over the many filters that AngularJS provides out of the box. Most of these are documented in the Angular Docs but lack real-world examples, so I will approach this topic with a plethora of code samples and real-world uses.
Let’s jump right into it!
Filters, as the name implies, allow you to manipulate and filter the presentation of your views. You can apply Angular filters directly by extending the bindings in your HTML views such as:
{{ totalCost | currency }}
Filters can also be chained, by adding the pipe ( |
) character between each filter, so if we wanted to apply multiple filters to a single expression, it would look something like:
{{ totalCost | currency | filter2 | filter3 }}
Finally, filters can be extended even further by supporting arguments, for example:
{{ totalCost | currency:"USD$" }}
It is common practice to apply filters directly to the binding expressions in the HTML views, but you can also apply filters in your controllers and directives as well.
The syntax for applying filters in your JavaScript files will look like this:
$filter('number')(15, 5)
This filter is equivalent to {{ 15 | number:5 }}
and both will render the number 15 as a string to five decimal places (i.e., 15.00000) in your view.
It’s ok if you don’t fully grasp what we’re doing so far, we are just going over the syntax here - next we’ll walk through the built-in filters and how they can improve the presentation of your apps.
AngularJS comes with prebuilt filters for making a string upper- or lowercase. The uppercase and lowercase filters do what their names imply: convert a string to all uppercase or all lowercase characters.
The simplest way to apply this filter to an expression is to add it directly in the view. Please check out the Codepen below to see the Uppercase and Lowercase filters illustrated.
app.controller('limits', function($scope){
$scope.copy = "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua."
$scope.yelling = "LOREM IPSUM DOLOR SIT AMET, CONSECTETUR ADIPISCING ELIT, SED DO EIUSMOD TEMPOR INCIDIDUNT UT LABORE ET DOLORE MAGNA ALIQUA."
})
Uppercase:
{{ copy | uppercase }}
Lowercase:
{{ yelling | lowercase }}
See the Pen icDkb.
AngularJS also comes with a couple of very useful filters for dealing with numbers and currencies. For numbers, Angular lets you control how a number is displayed with regard to its decimal representation and rounding.
In the Codepen below, we apply a couple of different number filters to display a whole number to four decimal places, and we also use the number filter to round a number to the nearest hundredth.
app.controller('numbers', function($scope){
$scope.defaultNumber = 50;
$scope.defaultNumberDecimals = 50.458;
})
Default Number Filter:
{{ defaultNumber | number }}
Number to Four Decimal Places:
{{ defaultNumber | number:4 }}
Round Number to Two Decimal Places:
{{ defaultNumberDecimals | number:2 }}
See the Pen KAzlr.
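Outside of Angular, the decimal-place part of the number filter can be approximated with Number.prototype.toFixed. This is a sketch only: the real filter also adds thousands separators, which this helper omits, and its default of three decimal places matches Angular’s:

```javascript
// Render a number to a fixed number of decimal places (default 3, like Angular's number filter).
function formatNumber(value, fractionSize) {
  if (isNaN(value)) {
    return value; // pass non-numbers through unmodified
  }
  return Number(value).toFixed(fractionSize === undefined ? 3 : fractionSize);
}

formatNumber(15, 5);     // "15.00000"
formatNumber(50.458, 2); // "50.46"
```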
Part two of number manipulation deals with currencies and AngularJS filters really shine here. By default, if we apply the currency filter to a number, it will simply add a $
symbol before the number.
This works really well if you are writing an app that will only deal with dollars, but what if we wanted to localize the app to support a variety of currencies? We can simply expand the currency filter by passing in some parameters to the filter.
For example, if we wanted to display the currency in euros, our code would look like {{ totalCost | currency:'€' }}
. Additionally, in Angular 1.3+, the currency filter can be further extended to round numbers to as many decimal places as you want - or none at all.
To specify the number of decimal places, you would again just pass another parameter to the currency filter such as {{ totalCost | currency:'$':4 }}
, and this would render the number as “$15.0000” if totalCost
was 15.
Check out the Codepen examples below to see this filter further illustrated.
app.controller('currencies', function($scope){
$scope.defaultNumber = 59.99;
$scope.defaultNumberWhole = 59;
})
Default Currency Filter:
{{ defaultNumber | currency }}
Currency Filter on Whole Number:
{{ defaultNumberWhole | currency }}
Custom Currency Filter:
{{ defaultNumber | currency:'$COTCHES' }}
Custom Currency Filter with Decimal Point Control (Angular 1.3+):
{{ defaultNumber | currency:'£':0 }}
See the Pen eiGmD.
The date and time filters that come with Angular are amazing. They will take any standard ISO 8601 date/time string and format it in hundreds of different ways to meet your every need.
Whether you need the full date written as Sunday, October 19, 2014, or just the year, day, or month, or any combination you can think of (say, day first, then the day name, followed by the last two digits of the year, and concluded by the month in shorthand notation, i.e., Nov for November), AngularJS will do it for you.
Parsing and manipulating date/time is often difficult and time-consuming but the AngularJS filters make it a breeze. Like I said earlier, there are many different filters you can apply here, and I could write multiple articles on all the different ways you can apply these filters, but in the interest of time, I will direct you to the AngularJS docs that describe all the different parameters you can pass into the date filter and include a CodePen that shows off some of this functionality. Read more about the date filter here.
app.controller('dates', function($scope){
$scope.dateCommon = "2014-10-19";
$scope.dateUTC = "2014-10-19T06:46:00+00:00";
})
Default Date Filter:
{{ dateCommon | date }}
Full Date Filter:
{{ dateCommon | date:'fullDate' }}
Year Only Filter:
{{ dateCommon | date:'yyyy' }}
Custom Date Filter:
{{ dateUTC | date:"'Year:' yyyy, 'Month:' MMM, 'Day:' EEEE" }}
See the Pen gnHjr.
The built-in json filter in Angular converts a JavaScript object into a JSON string and prettifies it with indentation so that the JSON is much easier to read. There really isn’t anything else to say about this filter; it doesn’t accept any additional parameters, it simply converts an object to easily readable JSON.
Check out the simple CodePen below to see this illustrated.
app.controller('json', function($scope){
$scope.userInfo = {"name": "Bob", "email": "bob@inbox", "password":"youshallnotpass", "activities": ["jogging", "swimming", "boxing"], "eligibility": true, "hasActiveDevice": false}
})
JSON Filter:
{{ userInfo | json }}
See the Pen vcIuB.
The limitTo
filter, as its name implies, allows you to limit some string or array to a certain length. For example, applying a limitTo:10
filter to a string that contains 15 characters would only display the first 10 characters of that string.
limitTo
can also be applied to arrays and can be very powerful and intuitive when used in conjunction with ng-repeat. Combining limitTo
and ng-repeat
, you could very easily build a pagination system for your app for example.
One common use case where the limitTo
filter can come in handy is preview text. Say you’re building the front page for your blog in AngularJS and want to show a preview of the first 250 characters of each blog post. This can easily be accomplished by the following code {{ previewCopy | limitTo: 250 }}
.
Check out the CodePen below to see how we append an ellipsis (...
) to the end if the previewCopy
is over the character limit.
app.controller('limits', function($scope){
$scope.copy = "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum."
$scope.list = ['@scotch_io', '@sevilayha', '@kukicadnan', '@hollylawly', '@nickforthought', '@kenwheeler', '@mathiashansen']
})
LimitTo Filter Applied to a String:
{{ copy | limitTo:150 }}
LimitTo Filter Applied to an Array:
<ul>
<li ng-repeat="person in list | limitTo:4"> {{person}} </li>
</ul>
See the Pen sFoAH.
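The preview-text idea can also be captured in a plain helper. A minimal sketch of truncating with a trailing ellipsis only when the text exceeds the limit:

```javascript
// Truncate `text` to `limit` characters, appending '...' only when needed.
function previewText(text, limit) {
  if (typeof text !== 'string' || text.length <= limit) {
    return text; // short enough: return unmodified
  }
  return text.slice(0, limit) + '...';
}

previewText('Lorem ipsum dolor sit amet', 11); // "Lorem ipsum..."
previewText('Short', 10);                      // "Short"
```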
We’ve gone over all of the built-in AngularJS filters. The built-in filters provide a variety of functionality from simple uppercasing of a string to complex manipulation of dates.
We went over the different ways to apply filters, the most common being by applying the filter directly in the binding, but I’ve also shown you how to apply the filter through JavaScript.
]]>Warning: For the latest information, refer to the documentation for creating 1-Click NodeJS Droplets on DigitalOcean.
There are various platforms that help with deploying Node.js apps to production.
In this tutorial, we’ll be looking at how to deploy a Node.js app to DigitalOcean. Compared to these other platforms, DigitalOcean is cheaper, and you can also log in to your server and configure it however you like.
You get more control over your deployment and also it’s a great experiment to see exactly how Node apps are deployed to production.
This tutorial assumes the following:
Let’s quickly build a sample app that we’ll use for the purpose of this tutorial. It’s going to be a pretty simple app.
- // create a new directory
- mkdir sample-nodejs-app
-
- // change to new directory
- cd sample-nodejs-app
-
- // Initialize npm
- npm init -y
-
- // install express
- npm install express
-
- // create an index.js file
- touch index.js
Open index.js
and paste the code below into it:
// index.js
const express = require('express')
const app = express()
app.get('/', (req, res) => {
res.send('Hey, I\'m a Node.js app!')
})
app.listen(3000, () => {
console.log('Server is up on 3000')
})
You can start the app with:
- node index.js
And access it on http://localhost:3000
.
You should get:
Hey, I'm a Node.js app!
The complete code is available on GitHub.
Now let’s take our awesome app to production.
Login to your DigitalOcean account and create a new droplet (server). We’ll be going to with One-click apps. Select Node.js as shown below:
Next, we’ll choose the $10 plan. The sample app would run perfectly on the $5 plan, but we wouldn’t be able to install the npm dependencies, because npm requires at least 1GB of RAM to install them. (There is a way around this by creating swap memory, but that is beyond the scope of this tutorial.)
Next, select a datacenter region, we’ll go with the default:
Next, add a new SSH key or choose from the existing ones that you have added. You can get your SSH key by running the command below on your local computer:
- cat ~/.ssh/id_rsa.pub
The command above will print your SSH key on the terminal, which you can then copy and paste in the SSH Key Content field. Also, give your SSH key a name.
Finally, choose a hostname for the droplet and click the Create button.
After a couple of seconds, you’ll have your new server up and running on Ubuntu 16.04 and NodeJS version 6.11.2. Note the IP address of the server as we’ll be using it to access the server.
Before we start configuring the server for the task app, let’s quickly create a non-root user which we’ll use henceforth for the rest of the tutorial.
Note: As a security measure, it is recommended to carry out tasks on your server as a non-root user with administrative privileges.
First, we need to log in to the server as root. We can do that using the server’s IP address:
- ssh root@SERVER_IP_ADDRESS
Once we are logged in to the server, we can move on to create a new user:
- adduser mezie
This will create a new user called mezie, you can name the user whatever you like. You will be asked a few questions, starting with the account password.
Having created the new user, we need to give it administrative privileges. That is, the user will be able to carry out administrative tasks by using sudo
command.
- usermod -aG sudo mezie
The command above adds the user mezie to sudo
group.
Now the user can run commands with superuser privileges.
You need to copy your public key to your new server. Enter the command below on your local computer:
- cat ~/.ssh/id_rsa.pub
This will print your SSH key to the terminal, which you can then copy.
For the new user to log in to the server with SSH key, we must add the public key to a special file in the user’s home directory.
Still logged in as root on the server, enter the following command:
- su - mezie
This will temporarily switch to the new user. Now you’ll be in your new user’s home directory.
Next, we need to create a new directory called .ssh
and restrict its permission:
- mkdir ~/.ssh
- chmod 700 ~/.ssh
Next, within the .ssh
directory, create a new file called authorized_keys
:
- touch ~/.ssh/authorized_keys
Next, open the file with vim
:
- vim ~/.ssh/authorized_keys
Next, paste your public key (copied above) into the file. To save the file, hit ESC
to stop editing, then :wq
and press ENTER
.
Next, restrict the permissions of the authorized_keys
file with this command:
- chmod 600 ~/.ssh/authorized_keys
Type the command below to return to the root user:
- exit
Now your public key is installed, and you can use SSH keys to log in as your user.
To make sure you can log in as the new user with SSH. Enter the command below in a new terminal on your local computer:
- ssh mezie@SERVER_IP_ADDRESS
If all went well, you’ll be logged in to the server as the new user with SSH.
The rest of the tutorial assumes you are logged in to the server with the new user created (mezie in my case).
We are going to clone the app unto the server directly in the user’s home directory (that is, /home/mezie
in my case):
- git clone https://github.com/ammezie/sample-nodejs-app.git
Next, we install the dependencies:
- cd sample-nodejs-app
- npm install
Once the dependencies are installed we can test the app to make sure everything is working as expected. We’ll do so with:
- node index.js
The app is listening on port 3000 and can be accessed at http://localhost:3000. To test that the app is actually working, open a new terminal (still on the server) and enter the command below:
- curl http://localhost:3000
You should get an output as below:
- Hey, I'm a Node.js app!
Good! The app is up and running fine. But whenever the app crashes we’ll need to manually start the app again which is not a recommended approach. So, we need a process manager to help us with starting the app and restarting it whenever it crashes. We’ll use PM2 for this.
We’ll install it globally through npm:
- sudo npm install -g pm2
With PM2 installed, we can start the app with it:
- pm2 start index.js
Once the app is started you will get an output from PM2 indicating the app has started.
To launch PM2 on system startup or reboot, enter the command below:
- pm2 startup systemd
You’ll get the following output:
- [PM2] Init System found: systemd
- [PM2] To setup the Startup Script, copy/paste the following command:
- sudo env PATH=$PATH:/usr/local/bin /usr/local/lib/node_modules/pm2/bin/pm2 startup systemd -u mezie --hp /home/mezie
Copy and run the last command from the output above:
- sudo env PATH=$PATH:/usr/local/bin /usr/local/lib/node_modules/pm2/bin/pm2 startup systemd -u mezie --hp /home/mezie
Now PM2 will start at boot up.
Next, we’ll install Nginx as the web server to use for a reverse proxy, which will allow us to access the app directly with an IP address or domain instead of appending the port to the IP address.
- sudo apt-get update
- sudo apt-get install nginx
Because we chose 1-Click Apps while creating our Droplet, the ufw firewall is set up and running for us. Now we need to open the firewall for HTTP only, since we are not concerned with SSL in this tutorial:
- sudo ufw allow 'Nginx HTTP'
Finally, we set up Nginx as a reverse proxy server. To do this, run:
- sudo vim /etc/nginx/sites-available/default
Within the server block you should have an existing location / block. Replace the contents of that block with the following configuration:
// /etc/nginx/sites-available/default
...
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://localhost:3000;
proxy_set_header Host $http_host;
proxy_cache_bypass $http_upgrade;
proxy_redirect off;
}
Save and exit vim.
Test to make sure there are no syntax errors in the configuration by running:
- sudo nginx -t
Then restart Nginx:
- sudo systemctl restart nginx
Now you should be able to access the app with your IP_ADDRESS. You should get something similar to the image below:
In this tutorial, we have seen how to deploy a Node.js app to DigitalOcean. We also saw how to set up a reverse proxy server with Nginx.
PHP exceptions are thrown when an unexpected event or error occurs. As a rule of thumb, an exception should not be used to control application logic (such as replacing if-statements) and should be a subclass of the Exception class.
Being unexpected, an exception can be thrown at any point in our application.
Laravel provides a convenient exception handler class that checks for all exceptions thrown in a Laravel application and gives relevant responses. This is made possible by the fact that all Exceptions used in Laravel extend the Exception class.
One main advantage of having all exceptions caught by a single class is that we are able to create custom exception handlers that return different response messages depending on the exception.
In this tutorial, we will look at how to create a custom exception handler in Laravel 5.2 and how to return a 404 page depending on the Exception.
In Laravel 5.2, all errors and exceptions, both custom and default, are handled by the Handler
class in app/Exceptions/Handler.php
with the help of two methods.
report()
The report method enables you to log raised exceptions or send them to error-logging services such as Bugsnag or Sentry, which we will not delve into in this tutorial.
render()
The render method responds with an error message raised by an exception. It generates an HTTP response from the exception and sends it back to the browser.
/**
* Render an exception into an HTTP response.
*
* @param \Illuminate\Http\Request $request
* @param \Exception $e
* @return \Illuminate\Http\Response
*/
public function render($request, Exception $e)
{
return parent::render($request, $e);
}
We can however override the default error handling with our own custom exception handler.
/**
* @param \Illuminate\Http\Request $request
* @param \Exception $e
* @return \Illuminate\Http\Response
*/
public function render($request, Exception $e)
{
if ($e instanceof CustomException) {
return response()->view('errors.custom', [], 500);
}
return parent::render($request, $e);
}
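The CustomException referenced above is not defined in this snippet; as an assumption, it could be any plain subclass of Exception, for example:

```php
<?php
// Hypothetical app/Exceptions/CustomException.php -- the class name and
// namespace here are assumptions; any Exception subclass behaves the same
// way in the instanceof check above.
namespace App\Exceptions;

use Exception;

class CustomException extends Exception
{
    //
}
```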
Under the hood, Laravel does its own handling checks to determine the best possible response for an exception. Taking a look at the parent class (Illuminate\Foundation\Exceptions\Handler
), the render method generates a different response depending on the thrown Exception.
/**
* Render an exception into a response.
*
* @param \Illuminate\Http\Request $request
* @param \Exception $e
* @return \Symfony\Component\HttpFoundation\Response
*/
public function render($request, Exception $e)
{
if ($e instanceof HttpResponseException) {
return $e->getResponse();
} elseif ($e instanceof ModelNotFoundException) {
$e = new NotFoundHttpException($e->getMessage(), $e);
} elseif ($e instanceof AuthenticationException) {
return $this->unauthenticated($request, $e);
} elseif ($e instanceof AuthorizationException) {
$e = new HttpException(403, $e->getMessage());
} elseif ($e instanceof ValidationException && $e->getResponse()) {
return $e->getResponse();
}
if ($this->isHttpException($e)) {
return $this->toIlluminateResponse($this->renderHttpException($e), $e);
} else {
return $this->toIlluminateResponse($this->convertExceptionToResponse($e), $e);
}
}
In this section, we will create an inbuilt Laravel error by intentionally raising an exception.
To do this, we will try to fetch records that do not exist from a model using the firstOrFail()
Eloquent method.
Go ahead and set up a simple SQLite database. Luckily, Laravel ships with a User model and a corresponding users table migration. Simply do the following:
- Update the .env file so that DB_CONNECTION is sqlite and the only database parameter (the connection settings themselves live in config/database.php).
- Run php artisan migrate at the root of your Laravel project. This will set up a users table in the database.

We will then add a route and a controller to get the first user in our users table, who just so happens not to exist.
app/Http/routes.php
Route::get('/user', [
'uses' => 'SampleController@findUser',
'as' => 'user'
]);
App/Http/Controllers/SampleController.php
/**
* Return the first user in the users table
*
* @return Array User details
*/
public function findUser()
{
$user = User::firstOrFail();
return $user->toArray();
}
Running this on the browser will return a ModelNotFoundException
error response.
With this exception, we can now add a custom handler that returns our own error message.
We will modify the render
method in app/Exceptions/Handler.php
to return a JSON response for an ajax request or a view for a normal request if the exception is one of ModelNotFoundException
or NotFoundHttpException
.
If it is neither of the two, we will let Laravel handle the exception.
/**
* Render an exception into an HTTP response.
*
* @param \Illuminate\Http\Request $request
* @param \Exception $e
* @return \Illuminate\Http\Response
*/
public function render($request, Exception $e)
{
//check if exception is an instance of ModelNotFoundException.
if ($e instanceof ModelNotFoundException) {
// ajax 404 json feedback
if ($request->ajax()) {
return response()->json(['error' => 'Not Found'], 404);
}
// normal 404 view page feedback
return response()->view('errors.missing', [], 404);
}
return parent::render($request, $e);
}
Add a missing.blade.php file in resources/views/errors to contain our user feedback.
<!DOCTYPE html>
<html>
<head>
<title>User not found.</title>
</head>
<body>
<p>You broke the balance of the internet</p>
</body>
</html>
If we now refresh the page, we have the following message on our view with a 404 status code.
When a user visits an undefined route such as /foo/bar/randomstr1ng, a NotFoundHttpException
exception, which comes as part of the Symfony package, is thrown.
To handle this exception, we will add a second condition in the render
method we modified earlier and return a message from resources/views/errors/missing.blade.php:
/**
* Render an exception into an HTTP response.
*
* @param \Illuminate\Http\Request $request
* @param \Exception $e
* @return \Illuminate\Http\Response
*/
public function render($request, Exception $e)
{
//check if exception is an instance of ModelNotFoundException.
//or NotFoundHttpException
if ($e instanceof ModelNotFoundException or $e instanceof NotFoundHttpException) {
// ajax 404 json feedback
if ($request->ajax()) {
return response()->json(['error' => 'Not Found'], 404);
}
// normal 404 view page feedback
return response()->view('errors.missing', [], 404);
}
return parent::render($request, $e);
}
As in the previous section, Laravel 5.2 makes it easy to create custom error pages based on the exception that was thrown.
We can also simply generate a 404 error page response by calling the abort
method which takes an optional response message.
abort(404, 'The resource you are looking for could not be found');
This will check for a corresponding resources/views/errors/404.blade.php and serve an HTTP response with the 404 status code back to the browser. The same applies to the 401 and 500 error status codes.
Depending on an application’s environment, you may want to show varying levels of error detail. You can set the APP_DEBUG value read in config/app.php to either true or false by changing it in your .env file.
In most cases, you may not want your users in production to see detailed error messages. It is therefore good practice to set APP_DEBUG
value to false while in a production environment.
In the early history of the JavaScript language, a cloud of animosity formed over the lack of a proper syntax for defining classes as in most object-oriented languages. It wasn’t until the ES6 spec release in 2015 that the class keyword was introduced; it is described as syntactic sugar over the existing prototype-based inheritance.
At its most basic level, the class
keyword in ES6
is equivalent to a constructor function definition that conforms to prototype-based inheritance. It may seem redundant that a new keyword was introduced to wrap an already existing feature but it leads to readable code and lays the foundation upon which future object-oriented features can be built.
Before ES6
, if we had to create a blueprint (class) for creating many objects of the same type, we’d use a constructor function like this:
function Animal(name, fierce) {
Object.defineProperty(this, 'name', {
get: function() { return name; }
});
Object.defineProperty(this, 'fierce', {
get: function() { return fierce; }
});
}
Animal.prototype.toString = function() {
return 'A' + ' ' + (this.fierce ? 'fierce' : 'tame') + ' ' + this.name;
}
This is a simple object constructor that represents a blueprint for creating instances of the Animal
class. We have defined two read-only own
properties and a custom toString
method on the constructor function. We can now create an Animal
instance with the new
keyword:
var Lion = new Animal('Lion', true);
console.log(Lion.toString()); // "A fierce Lion"
Great! It works as expected. We can rewrite the code using the ES6
class for a concise version:
class Animal {
constructor(name, fierce) {
this._name = name;
this._fierce = fierce;
}
get name() {
return this._name;
}
get fierce() {
return `This animal is ${ this._fierce ? 'fierce' : 'tame' }`;
}
toString() {
return `This is a ${ this._fierce ? 'fierce' : 'tame' } ${this._name}`;
}
}
Let’s create an instance of the Animal
class with the new keyword as we did before:
let Lion = new Animal('Lion', true);
console.log(Lion.fierce); // "This animal is fierce"
console.log(Lion.toString()); // "This is a fierce Lion"
Defining classes in ES6
is very straightforward and feels more natural in an object-oriented sense than the previous simulation using object constructors. Let’s take an in-depth look at the ES6
class by exploring some of its attributes and ramifications.
Making a transition from using the older object constructors to the newer ES6
classes shouldn’t be difficult at all since the class
keyword is just a special function
and exhibits expected function behavior. For example, just like a function, a class
can be defined by either a declaration or an expression, where the latter can be named or unnamed.
A class declaration is defined with the class
keyword and followed by the name of the class.
class Animal {}
We already used the class
declaration when we wrote the ES6
version of the Animal
constructor function :
class Animal {
constructor(name, fierce) {
this._name = name;
this._fierce = fierce;
}
}
A class expression allows for a bit more flexibility; a class may be named or unnamed. When a class expression is named, the name is local to the class body, and from the outside it is exposed through the class’s .name property.
An unnamed class expression skips the name after the class
keyword:
// unnamed
let animal = class {}
A named class expression, on the other hand, includes the name:
// named
let animal = class Animal {}
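A quick sketch of the .name behavior mentioned above (the variable names here are illustrative):

```javascript
// The name of a named class expression comes from the expression itself:
let animal = class Animal {};
console.log(animal.name); // "Animal"

// "Animal" is only a binding inside the class body, not in the outer scope:
console.log(typeof Animal); // "undefined"

// An unnamed class expression infers its name from the variable it is
// assigned to (per the spec's NamedEvaluation rules):
let pet = class {};
console.log(pet.name); // "pet"
```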
When comparing the object constructor to the ES6 class, it is worth noting that, unlike the object constructor, which can be accessed before its definition because of hoisting, the class can’t be and isn’t hoisted. While this may seem like a major limitation on ES6 classes, it doesn’t have to be; good ES6 practice demands that any function that must mutate an instance of a class may be defined anywhere in the program but should be invoked only after the class itself has been defined.
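The lack of hoisting can be demonstrated directly. In this sketch, the class name Widget is illustrative:

```javascript
// Unlike function declarations, class declarations are not hoisted --
// technically the binding exists but sits in the "temporal dead zone"
// until the declaration is evaluated.
let caught = null;
try {
  new Widget(); // referenced before the declaration below is evaluated
} catch (e) {
  caught = e;
}
class Widget {}

console.log(caught instanceof ReferenceError); // true
console.log(new Widget() instanceof Widget);   // true -- fine after definition
```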
After defining a class using either of the two stated methods, the curly brackets {} should hold the class members, such as instance variables, methods, or a constructor; the code within the curly brackets makes up the body of the class.
A class’ constructor is simply a method whose purpose is to initialize an instance of that class. This means that whenever an instance of a class is created, the constructor (where it is defined) of the class is invoked to do something on that instance; it could maybe initialize the object’s properties with received parameters or default values when the former isn’t available.
There can only be a single constructor method associated with a class so be careful not to define multiple constructor methods as this would result in a SyntaxError. The super keyword can be used within an object’s constructor to call the constructor of its superclass.
class Animal {
constructor(name, fierce) { // there can only be one constructor method
this._name = name;
this._fierce = fierce;
}
}
The code within the body of a class is executed in strict mode.
The body of a class usually comprises instance variables to define the state of an instance of the class, and prototype methods to define the behavior of an instance of that class. Before ES6
, if we needed to define a method on a constructor function, we could do it like this:
function Animal(name, fierce) {
Object.defineProperty(this, 'name', {
get: function() { return name; }
});
Object.defineProperty(this, 'fierce', {
get: function() { return fierce; }
});
}
Animal.prototype.toString = function() {
return 'A' + ' ' + (this.fierce ? 'fierce' : 'tame') + ' ' + this.name;
}
Or
function Animal(name, fierce) {
Object.defineProperty(this, 'name', {
get: function() { return name; }
});
Object.defineProperty(this, 'fierce', {
get: function() { return fierce; }
});
this.toString = function() {
return 'A' + ' ' + (this.fierce ? 'fierce' : 'tame') + ' ' + this.name;
}
}
The two different methods we defined above are referred to as prototype methods and can be invoked by an instance of a class. In ES6, we can define two types of methods: prototype and static methods. Defining a prototype method in ES6 is quite similar to what we have above, except that the syntax is cleaner (we don’t include the prototype property) and more readable:
class Animal {
constructor(name, fierce) {
this._name = name;
this._fierce = fierce;
}
get name() {
return this._name;
}
get fierce() {
return ` This animal is ${ this._fierce ? 'fierce' : 'tame' }`;
}
toString() {
return `This is a ${ this._fierce ? 'fierce' : 'tame' } ${this._name}`;
}
}
Here we first define two getter
methods using a shorter syntax, then we create a toString
method that basically checks to see if an instance of the Animal
class is a fierce or tame animal. These methods can be invoked by any instance of the Animal
class but not by the class itself.
ES6
prototype methods can be inherited by children classes to simulate an object-oriented behavior in JavaScript but under the hood, the inheritance feature is simply a function of the existing prototype chain and we’d look into this very soon.
All ES6 class methods are non-constructible and will throw a TypeError if invoked with the new keyword.
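This is easy to verify; class methods have no construct behavior, so applying `new` to one throws:

```javascript
// Class methods lack an internal [[Construct]] slot, so they cannot be
// invoked with `new`. The class and method names here are illustrative.
class Animal {
  speak() { return 'generic sound'; }
}

const a = new Animal();
let err = null;
try {
  new a.speak(); // prototype methods cannot be used as constructors
} catch (e) {
  err = e;
}
console.log(err instanceof TypeError); // true
```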
Static methods resemble prototype methods in that they define behavior, but they differ from their prototype counterparts in that they cannot be invoked by an instance of a class. A static method can only be invoked on the class itself; attempting to call a static method on an instance fails with a TypeError, because the method does not exist on the instance’s prototype chain.
A static method must be defined with the static keyword. In most cases, static methods are used as utility functions on classes.
Let’s define a static utility method on the Animal
class that simply returns a list of animals:
class Animal {
constructor(name, fierce){
this._name = name;
this._fierce = fierce;
}
static animalExamples() {
return `Some examples of animals are Lion, Elephant, Sheep, Rhinoceros, etc.`
}
}
Now, we can call the animalExamples()
method on the class itself:
console.log(Animal.animalExamples()); // "Some examples of animals are Lion, Elephant, Sheep, Rhinoceros, etc."
In object-oriented programming, it’s good practice to create a base class that holds some generic methods and attributes, then create other more specific classes that inherit these generic methods from the base class, and so on. In ES5
we relied on the prototype chain to simulate this behavior and the syntax would sometimes become messy.
ES6
introduced the somewhat familiar extends
keyword that makes inheritance very easy. A subclass can easily inherit attributes from a base class like this:
class Animal {
constructor(name, fierce) {
this._name = name;
this._fierce = fierce;
}
get name() {
return this._name;
}
get fierce() {
return `This animal is ${ this._fierce ? 'fierce' : 'tame' }`;
}
toString() {
return `This is a ${ this._fierce ? 'fierce' : 'tame' } ${this._name}`;
}
}
class Felidae extends Animal {
constructor(name, fierce, family) {
super(name, fierce);
this._family = family;
}
family() {
return `A ${this._name} is an animal of the ${this._family} subfamily under the ${Felidae.name} family`;
}
}
We have created a subclass here — Felidae (colloquially referred to as “cats”) — and it inherits the methods on the Animal
class. We make use of the super
keyword within the constructor method of the Felidae
class to invoke the super class’ (base class) constructor. Awesome, let’s try creating an instance of the Felidae
class and invoking an own method and an inherited method:
var Tiger = new Felidae('Tiger', true, 'Pantherinae');
console.log(Tiger.toString()); // "This is a fierce Tiger"
console.log(Tiger.family()); // "A Tiger is an animal of the Pantherinae subfamily under the Felidae family"
If a constructor is present within a subclass, it needs to invoke super() before using "this". It is also possible to use the extends keyword to extend a function-based “class”, but an attempt to extend a class created solely from object literals will result in an error.
At the beginning of this article, we saw that most of the new keywords in ES6 are merely syntactic sugar over the existing prototype-based inheritance. Let’s now take a look under the hood and see how the prototype chain works.
While it’s nice to define classes and perform inheritance with the new ES6
keywords, it’s even nicer to understand how things work at the canonical level. Let’s take a look at JavaScript objects: All JavaScript objects have a private property that points to a second object (except in a few rare cases where it points to null
) associated with them, this second object is called the prototype
.
The first object inherits properties from the prototype
object and the prototype may, in turn, inherit some properties from its own prototype and it goes on like that until the last prototype on the chain has its prototype property equal to null
.
All JavaScript objects created by assigning an identifier the value of object literals share the same prototype object. This means that their private prototype property points to the same object in the prototype chain and hence inherits its properties. This object can be referred to in JavaScript code as Object.prototype
.
Objects created by invoking a class’ constructor or constructor function initialize their prototype from the prototype property of the constructor function. In other words, when a new object is created by invoking new Object()
, that object’s prototype becomes Object.prototype
just like any object created from object literals. Similarly, a new Date() object will inherit from Date.prototype, and a new Number() from Number.prototype.
Nearly all objects in JavaScript are instances of
Object
which sits on the top of a prototype chain.
We have seen that it’s normal for JavaScript objects to inherit properties from another object (prototype) but the Object.prototype
exhibits a rare behavior where it does not have any prototype and does not inherit any properties (it sits on the top of a prototype chain) from another object.
Nearly all of JavaScript’s built-in constructors inherit from Object.prototype
, so we can say that Number.prototype
inherits properties from Object.prototype
. The effect of this relationship: creating an instance of Number in JavaScript (using new Number()) yields an object that inherits properties from both Number.prototype and Object.prototype, and that is the prototype chain.
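The chain just described can be verified directly with Object.getPrototypeOf:

```javascript
// Walking the prototype chain of a Number instance, from the instance
// up to Object.prototype and finally to null.
const n = new Number(42);

console.log(Object.getPrototypeOf(n) === Number.prototype);                 // true
console.log(Object.getPrototypeOf(Number.prototype) === Object.prototype);  // true
console.log(Object.getPrototypeOf(Object.prototype));                       // null
```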
JavaScript objects can be thought of as containers since they hold the properties defined on them, and these properties are referred to as "own properties"; however, objects are not limited to just their own properties. The prototype chain plays a big role when a property is looked up on an object: JavaScript first checks the object’s own properties and, if the property isn’t found there, walks up the prototype chain until the property is found or the chain ends at null.
Let’s write some code to clearly simulate the behavior of prototypal inheritance in JavaScript.
We will be using the ES5
method — Object.create()
— for this example so let’s define it:
Object.create()
is a method that creates a new object, using its first argument as the prototype of that object.
let Animals = {}; // Animal inherits object methods from Object.prototype.
Animals.eat = true; // Animal has an own property - eat (all Animals eat).
let Cat = Object.create(Animals); // Cat inherits properties from Animal and Object.prototype.
Cat.sound = true; // Cat has its own property - sound (the animals under the cat family make sounds).
let Lion = Object.create(Cat); // Lion (a prestigious cat) inherits properties from Cat, Animal, and Object.prototype.
Lion.roar = true; // Lion has its own property - roar (Lions can roar)
console.log(Lion.roar); // true - This is an "own property".
console.log(Lion.sound); // true - Lion inherits sound from the Cat object.
console.log(Lion.eat); // true - Lion inherits eat from the Animal object.
console.log(Lion.toString()); // "[object Object]" - Lion inherits toString method from Object.prototype.
Here’s a verbal interpretation of what we did above:
- We created an Animals object, and it inherits properties from Object.prototype.
- We set Animals' own property, eat, to true (all animals eat).
- We created a Cat object from Animals (therefore Cat inherits properties from Animals and Object.prototype).
- We set Cat's own property, sound, to true (the animals under the cat family make sounds).
- We created a Lion object from Cat (therefore Lion inherits properties from Cat, Animals, and Object.prototype).
- We set Lion's own property, roar, to true (lions can roar).
- We logged some properties on the Lion object, and they all returned the right values by first seeking the properties on the Lion object itself, then moving on to the prototypes (and prototypes of prototypes) where a property wasn’t available on the former.

This is a basic but accurate simulation of the prototypal inheritance in JavaScript using the prototype chain.
In this article, we have gone through the basics of ES6 classes and prototypal inheritance. Hopefully, you have learned a thing or two from reading the article. If you have any questions, leave them below in the comments section.
Markdown is a popular text format written in an easy-to-read way and is convertible to HTML. It is a markup format that has been popularized by sites such as GitHub and Stack Overflow.
Today we will be building an app that lets us view the raw Markdown on the left side and the rendered HTML markup on the right side. We will also allow multiple people to work on the same Markdown document at the same time via a shareable URL and all changes will be saved.
Let’s get started on our real-time Markdown viewer app. We will be creating our backend in Node for this application. Create a project directory and then from the command line run the following:
- npm init
This will prompt us with several questions. You can fill in the prompts accordingly. This will create a package.json
file. Here is my sample package.json
file.
{
"name": "RealtimeMarkdownViewer",
"description": "Realtime Markdown Viewer",
"main": "server.js",
"version": "1.0.0",
"repository": {
"type": "git",
"url": "git@github.com:sifxtreme/realtime-markdown.git"
},
"keywords": [
"markdown",
"realtime",
"sharejs"
],
"author": "Asif Ahmed",
"dependencies": {
"express": "^4.12.4",
"ejs": "^2.3.1",
"redis": "^0.10.3",
"share": "0.6.3"
},
"engines": {
"node": "0.10.x",
"npm": "1.3.x"
}
}
Now let’s create a server.js
file in our root directory. This will be the main server file. We will be using Express as our web application framework. Using Express makes building a server simpler. With Express we will be using EJS for our view templates. To install Express and EJS run the following commands:
- npm install --save express
- npm install --save ejs
Also, create a views
folder and a public
folder in the root directory. The views
folder is where we will be putting our EJS templates and the public
folder is where will be serving our assets (stylesheets, JavaScript files, images). Now, we are ready to add some code to our server.js
file.
var express = require('express');
var app = express();
// set the view engine to ejs
app.set('view engine', 'ejs');
// public folder to store assets
app.use(express.static(__dirname + '/public'));
// routes for app
app.get('/', function(req, res) {
res.render('pad');
});
// listen on port 8000 (for localhost) or the port defined for heroku
var port = process.env.PORT || 8000;
app.listen(port);
Here we require the Express module, set the rendering engine to EJS, and have a route for our home page. We also set the public
directory to be a static directory. Lastly, we set the server to listen on port 8000. From our home route, we will be rendering a file called pad.ejs
from the views directory. This is a sample views/pad.ejs file.
<!DOCTYPE html>
<html>
<head>
<title>Realtime Markdown Viewer</title>
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/css/bootstrap.min.css" rel="stylesheet">
</head>
<body class="container-fluid">
Hello World!
</body>
</html>
For styling, we added Bootstrap. Let’s start up our Node server (node server.js) and go to http://localhost:8000 in our web browser. You should see something like this:
We don’t want our view file to just say “Hello World!”, so let’s edit it. We want a text area on the left side where the user can add Markdown and we want an area on the right side where the user can see the Markdown converted into HTML markup. If a user edits the text area, we want the Markdown area to be updated automatically. Stylistically, we want both our textarea and converted Markdown area to be 100% height.
To convert text to HTML, we will be using a library called Showdown. Let’s review our updated view file.
<!DOCTYPE html>
<html>
<head>
<title>Realtime Markdown Viewer</title>
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/css/bootstrap.min.css" rel="stylesheet">
<link href="style.css" rel="stylesheet">
</head>
<body class="container-fluid">
<section class="row">
<textarea class="col-md-6 full-height" id="pad">Write your text here..</textarea>
<div class="col-md-6 full-height" id="markdown"></div>
</section>
<script src="https://cdn.rawgit.com/showdownjs/showdown/1.0.2/dist/showdown.min.js"></script>
<script src="script.js"></script>
</body>
</html>
In the view file, we added links to a CSS and a JS file. We also added the textarea (where we write the markdown) and markdown area (where we view the markdown). Notice that they have specific IDs — this will be useful for our JavaScript. Let’s add some style now to public/style.css
.
html, body, section, .full-height {
height: 100%;
}
#pad{
font-family: Menlo,Monaco,Consolas,"Courier New",monospace;
border: none;
overflow: auto;
outline: none;
resize: none;
-webkit-box-shadow: none;
-moz-box-shadow: none;
box-shadow: none;
}
#markdown {
overflow: auto;
border-left: 1px solid black;
}
For our JavaScript file (public/script.js) we want a function that takes the textarea text, converts it to HTML, and places that HTML in our markdown area. We also want an event listener so that any input change in the text area (keydown, cut, paste, etc.) runs this converter function. Finally, we want the function to run once on page load. Here is our public/script.js file.
window.onload = function() {
var converter = new showdown.Converter();
var pad = document.getElementById('pad');
var markdownArea = document.getElementById('markdown');
var convertTextAreaToMarkdown = function(){
var markdownText = pad.value;
html = converter.makeHtml(markdownText);
markdownArea.innerHTML = html;
};
pad.addEventListener('input', convertTextAreaToMarkdown);
convertTextAreaToMarkdown();
};
At this point, we should have a functional app that will let us edit and view markdown right away. If we go to the homepage and add some sample markdown, we should see something like this:
Although we have a working prototype where a user can work on a Markdown document, we have to add the feature where multiple people can work on a single Markdown document. At this point, if multiple users go to the home page, they can each work on their own Markdown document, and each change they make will only be viewable to them. Also if they end up refreshing the page, all their work will be lost. Therefore, we need to add a way for multiple users to edit the same Markdown document and we also need a way to save changes.
As soon as a user types on a page, we want the change to be reflected for all users. We want this Markdown app to update in real time. Basically, we are trying to add a “Google Document” type of functionality where changes are seen automatically. This is not an easy problem to solve; however, there is a library that does the heavy lifting for us. ShareJS is a library that implements real-time communication.
ShareJS has one dependency though — it requires Redis. Redis is a fast data store and that is where we will be storing our Markdown files. To download and install Redis, we can follow the Redis documentation.
Once we install Redis, we need to use npm to install sharejs
and redis
, and then we should restart Node. ShareJS allows us to save the Markdown document as soon as any user makes a change to it.
- npm install --save share@0.6.3
- npm install --save redis
First let’s add the ShareJS code to our server file.
var express = require('express');
var app = express();
// set the view engine to ejs
app.set('view engine', 'ejs');
// public folder to store assets
app.use(express.static(__dirname + '/public'));
// routes for app
app.get('/', function(req, res) {
res.render('pad');
});
app.get('/(:id)', function(req, res) {
res.render('pad');
});
// get sharejs dependencies
var sharejs = require('share');
require('redis');
// options for sharejs
var options = {
db: {type: 'redis'},
};
// attach the express server to sharejs
sharejs.server.attach(app, options);
// listen on port 8000 (for localhost) or the port defined for heroku
var port = process.env.PORT || 8000;
app.listen(port);
We require ShareJS and Redis and set some options for them. We force ShareJS to use Redis as its data store. Then, we attach our Express server to our ShareJS object. Now we need to add links to some “ShareJS” frontend JavaScript files in our view file.
<!DOCTYPE html>
<html>
<head>
<title>Realtime Markdown Viewer</title>
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/css/bootstrap.min.css" rel="stylesheet">
<link href="style.css" rel="stylesheet">
</head>
<body class="container-fluid">
<section class="row">
<textarea class="col-md-6 full-height" id="pad">Write markdown text here..</textarea>
<div class="col-md-6 full-height" id="markdown"></div>
</section>
<script src="https://cdn.rawgit.com/showdownjs/showdown/1.0.2/dist/showdown.min.js"></script>
<script src="/channel/bcsocket.js"></script>
<script src="/share/share.uncompressed.js"></script>
<script src="/share/textarea.js"></script>
<script src="script.js"></script>
</body>
</html>
The files we require are for creating a socket connection to our backend (bcsocket.js
) and sending and receiving textarea events (share.uncompressed.js
and textarea.js
). Finally, we need to actually add the code that implements ShareJS in our frontend JavaScript file (public/script.js
).
window.onload = function() {
var converter = new showdown.Converter();
var pad = document.getElementById('pad');
var markdownArea = document.getElementById('markdown');
var convertTextAreaToMarkdown = function(){
var markdownText = pad.value;
var html = converter.makeHtml(markdownText);
markdownArea.innerHTML = html;
};
pad.addEventListener('input', convertTextAreaToMarkdown);
sharejs.open('home', 'text', function(error, doc) {
doc.attach_textarea(pad);
});
};
At the very bottom of this file, we open up a ShareJS connection to “home” (because we are on the home page). We then attach the textarea to the object returned by this connection. This code keeps our textarea in sync with everyone else’s textarea. So if Person A makes a change in their textarea, Person B will see that change automatically in their textarea. However, Person B’s Markdown area will not be updated right away. In fact, Person B’s Markdown area won’t be updated until they make a change to their textarea themselves. This is a problem. We will solve this by making sure a change is reflected every second if the textarea has changed.
window.onload = function() {
var converter = new showdown.Converter();
var pad = document.getElementById('pad');
var markdownArea = document.getElementById('markdown');
var previousMarkdownValue;
var convertTextAreaToMarkdown = function(){
var markdownText = pad.value;
previousMarkdownValue = markdownText;
var html = converter.makeHtml(markdownText);
markdownArea.innerHTML = html;
};
var didChangeOccur = function(){
if(previousMarkdownValue != pad.value){
return true;
}
return false;
};
setInterval(function(){
if(didChangeOccur()){
convertTextAreaToMarkdown();
}
}, 1000);
pad.addEventListener('input', convertTextAreaToMarkdown);
sharejs.open('home', 'text', function(error, doc) {
doc.attach_textarea(pad);
convertTextAreaToMarkdown();
});
};
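The dirty-check above can be generalized into a small factory (a sketch with hypothetical names; in this variant the last value is recorded when the check reports a change, rather than inside the render function):

```javascript
// Create a checker that reports whether a value has changed since the
// last time it reported a change. `read` returns the current value.
function createDirtyChecker(read) {
  var last;
  return function didChangeOccur() {
    var current = read();
    if (current === last) return false;
    last = current; // remember what we are about to render
    return true;
  };
}
```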
Now we have an app where multiple people can edit the home page Markdown file. However, what if we wanted to edit multiple Markdown files? What if we wanted to go to a URL like http://localhost:8000/important_doc1
and collaborate with Bob, and go to http://localhost:8000/important_doc2
and collaborate with Alice? How would we go about implementing this?
First, we need our server file to match wildcard routes; the /(:id) route below renders the same pad view for any document URL.
var express = require('express');
var app = express();
// set the view engine to ejs
app.set('view engine', 'ejs');
// public folder to store assets
app.use(express.static(__dirname + '/public'));
// routes for app
app.get('/', function(req, res) {
res.render('pad');
});
app.get('/(:id)', function(req, res) {
res.render('pad');
});
// get sharejs dependencies
var sharejs = require('share');
require('redis');
// options for sharejs
var options = {
db: {type: 'redis'},
};
// attach the express server to sharejs
sharejs.server.attach(app, options);
// listen on port 8000 (for localhost) or the port defined for heroku
var port = process.env.PORT || 8000;
app.listen(port);
On the frontend, instead of just connecting to “home”, we want to use the correct ShareJS room. Change "home"
to document.location.pathname
.
sharejs.open(document.location.pathname, 'text', function(error, doc) {
doc.attach_textarea(pad);
convertTextAreaToMarkdown();
});
There are a couple of issues that we need to address. One, it would be nice if the home page didn’t just show random text that the last user entered. Let’s disable the real-time Markdown functionality for the home page.
// ignore if on home page
if(document.location.pathname.length > 1){
// implement share js
var documentName = document.location.pathname.substring(1);
sharejs.open(documentName, 'text', function(error, doc) {
doc.attach_textarea(pad);
convertTextAreaToMarkdown();
});
}
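The path-to-document-name logic above can be isolated into a small pure helper (hypothetical name), which makes the home-page special case easy to test:

```javascript
// Return the ShareJS document name for a pathname, or null for the
// home page ("/"), which we exclude from real-time syncing.
function documentNameFromPath(pathname) {
  if (pathname.length <= 1) return null; // home page: no shared doc
  return pathname.substring(1);          // strip the leading slash
}
```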
convertTextAreaToMarkdown();
The last issue we need to resolve is forcing our TAB
button to act as we would expect a TAB
button to act in our textarea. Currently, if we press the TAB
button in our textarea, it will make us lose focus. This is terrible. Let’s add a function for our textarea that fixes this TAB
issue. Below is a copy of our final front-end JavaScript file.
window.onload = function() {
var converter = new showdown.Converter();
var pad = document.getElementById('pad');
var markdownArea = document.getElementById('markdown');
// make the tab act like a tab
pad.addEventListener('keydown',function(e) {
if(e.keyCode === 9) { // tab was pressed
// get caret position/selection
var start = this.selectionStart;
var end = this.selectionEnd;
var target = e.target;
var value = target.value;
// set textarea value to: text before caret + tab + text after caret
target.value = value.substring(0, start)
+ "\t"
+ value.substring(end);
// put caret at right position again (add one for the tab)
this.selectionStart = this.selectionEnd = start + 1;
// prevent the focus loss
e.preventDefault();
}
});
var previousMarkdownValue;
// convert text area to markdown html
var convertTextAreaToMarkdown = function(){
var markdownText = pad.value;
previousMarkdownValue = markdownText;
var html = converter.makeHtml(markdownText);
markdownArea.innerHTML = html;
};
var didChangeOccur = function(){
if(previousMarkdownValue != pad.value){
return true;
}
return false;
};
// check every second if the text area has changed
setInterval(function(){
if(didChangeOccur()){
convertTextAreaToMarkdown();
}
}, 1000);
// convert textarea on input change
pad.addEventListener('input', convertTextAreaToMarkdown);
// ignore if on home page
if(document.location.pathname.length > 1){
// implement share js
var documentName = document.location.pathname.substring(1);
sharejs.open(documentName, 'text', function(error, doc) {
doc.attach_textarea(pad);
convertTextAreaToMarkdown();
});
}
// convert on page load
convertTextAreaToMarkdown();
};
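The string manipulation inside the TAB handler can be factored into a pure function (hypothetical name) so the caret math is easy to verify:

```javascript
// Insert a tab character at the caret, replacing any selected text.
// Returns the new textarea value and the new caret index.
function insertTab(text, selectionStart, selectionEnd) {
  return {
    value: text.substring(0, selectionStart) + "\t" + text.substring(selectionEnd),
    caret: selectionStart + 1 // caret sits just after the inserted tab
  };
}
```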
At this point, we have a fully functional real-time Markdown editor. Now, how do we get it up and running on Heroku? First, we need to make sure we have an account with Heroku. Then we will need to install the Heroku toolbelt.
In the command line, we will type heroku login
to log in to our Heroku account. Heroku uses Git to push so we need to make sure we have created a repo and committed all our files to a local repo. To create a Heroku app, type heroku create
from the command line.
To use Heroku we will need to change how we configure Redis. We will be using Redis to Go to add Redis to our Heroku app. From the command line type heroku addons:create redistogo
.
Also, we will need to edit our server.js
to handle this new configuration. Here is our final server.js
file.
var express = require('express');
var app = express();
// set the view engine to ejs
app.set('view engine', 'ejs');
// public folder to store assets
app.use(express.static(__dirname + '/public'));
// routes for app
app.get('/', function(req, res) {
res.render('pad');
});
app.get('/(:id)', function(req, res) {
res.render('pad');
});
// get sharejs dependencies
var sharejs = require('share');
// set up redis server
var redisClient;
console.log(process.env.REDISTOGO_URL);
if (process.env.REDISTOGO_URL) {
var rtg = require("url").parse(process.env.REDISTOGO_URL);
redisClient = require("redis").createClient(rtg.port, rtg.hostname);
redisClient.auth(rtg.auth.split(":")[1]);
} else {
redisClient = require("redis").createClient();
}
// options for sharejs
var options = {
db: {type: 'redis', client: redisClient}
};
// attach the express server to sharejs
sharejs.server.attach(app, options);
// listen on port 8000 (for localhost) or the port defined for heroku
var port = process.env.PORT || 8000;
app.listen(port);
Lastly, we need to tell our Heroku app that we are using Node and tell it which file Node uses to start up. Add a file called Procfile
in the root directory.
web: node server.js
Now after we commit our changes into Git, we are ready to push our app to Heroku. Type git push heroku master
to push our repo to Heroku. We should see Heroku returning a bunch of statuses as it is building the application.
Note: We can always rename our app later. At first, Heroku will probably give us a ridiculous-sounding name.
Type heroku open
to open the app! The first time may take a bit to load but it should be all working. Remember to go to something like our_application_url/document
to edit a new Markdown document.
Congratulations! We have a real-time Markdown application that we can use for writing markdown and to collaborate with our friends.
You can view the entire code repo here.
Frontend development is changing day by day, and we have to keep learning more. When we start learning a new framework or library, the first thing that is recommended is to build a todo list that exercises all the CRUD functions. But there is often no backend available to build that todo list against.
Simulate a backend server and a REST API with a simple JSON file.
To overcome that problem, json-server
came into the picture. With it, we can make a fake REST API. I have used it in my app and thought of sharing it with the frontend community.
JSON Server is an npm package that lets you create a REST JSON web service. All we need is a JSON file, and it will be used as our backend REST API.
You can either install it locally for a specific project or globally. I will go with locally.
- npm install -D json-server
Above is a single-line command to install the json-server
.
The -D flag means the package will appear in your devDependencies.
I am not going to explain that much here. If you want to learn more about that go through the docs for npm install.
Check JSON Server version using json-server -v
.
As per the standard convention, I am going to name the file db.json
, you can name it as per your needs.
{
"Todos": [
{
"id": 1,
"todo": "Check Todo"
},
{
"id": 2,
"todo": "New Todo"
}
]
}
For simplicity, I have included two elements into the Todos
array. You can add more also.
- json-server --watch db.json
Your JSON Server will be running on port 3000.
Now that we have our server and API running, we can test and access it with a tool like Postman or Insomnia.
These are REST clients that help us run HTTP calls.
Moving onto the CRUD operations. This is how we can access our data using RESTful routes.
Routing URLs
[GET] http://localhost:3000/Todos
[POST] http://localhost:3000/Todos (send the new todo in the request body)
[PUT] http://localhost:3000/Todos/id (send the updated todo in the request body)
[DELETE] http://localhost:3000/Todos/id
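These routes can be captured in a tiny helper (hypothetical, for illustration only) that builds the URL for each CRUD operation:

```javascript
// Build the RESTful URLs json-server exposes for the Todos collection.
function todoRoutes(base, id) {
  return {
    list:   base + '/Todos',       // GET    all todos
    create: base + '/Todos',       // POST   a new todo
    update: base + '/Todos/' + id, // PUT    replace todo `id`
    remove: base + '/Todos/' + id  // DELETE todo `id`
  };
}
```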
Now we can see that db.json
file can make REST webservice. Also, we can make custom URIs with a mapping file. I will cover those areas in the next article.
I hope this article removes every frontend developer’s pain (banging the head against the wall) of needing a backend server to test with. Further, you can check out the code in my GitHub repo and also refer to the official json-server docs for more operations.
If you have any queries, let me know in the comments.
When using Laravel’s Eloquent to get data back from our database, sometimes we don’t want to get certain information out of that call.
There are a few scenarios where this would be wanted. For instance, we don’t want to get a password (hopefully hashed) out of our database and display that to users.
Hiding an attribute is a simple process. All the work is done when defining your Eloquent model.
<?php
class User extends Eloquent {
protected $hidden = array('password', 'token');
}
Just like that, you won’t have those fields come through when accessing your Eloquent models.
<?php
Route::get('users', function() {
return User::all()->toArray();
});
Now the above code won’t show off our secret password or token information. You probably wouldn’t do the above code anyway, but it is just a good precaution to make sure that passwords don’t accidentally get spit out to users in any way.
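As a hypothetical illustration (the column names here are assumptions), a user row would serialize with everything except the hidden fields:

```json
[
  {
    "id": 1,
    "email": "chris@example.com",
    "created_at": "2015-01-01 00:00:00",
    "updated_at": "2015-01-01 00:00:00"
  }
]
```

The password and token columns still exist in the database; they are simply omitted whenever the model is converted to an array or JSON.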
Continue your learning with A Guide to Using Eloquent ORM in Laravel.
Route middleware is an extremely powerful tool in Node.js and Express. As an example of how powerful Express’s route middleware can be, the awesome Passport.js, which handles authentication, is itself route middleware.
The other big players you usually use, like bodyParser
and methodOverride
are also considered route middleware.
We’ll be looking at a quick way to make sure your users are authenticated before they visit parts of your application.
app.get('/hello', function(req, res) {
res.send('look at me!');
});
...
function isAuthenticated(req, res, next) {
// do any checks you want to in here
// CHECK THE USER STORED IN SESSION FOR A CUSTOM VARIABLE
// you can do this however you want with whatever variables you set up
if (req.user.authenticated)
return next();
// IF A USER ISN'T LOGGED IN, THEN REDIRECT THEM SOMEWHERE
res.redirect('/');
}
Now that we have our function to check if our user is logged in or authenticated, we’ll just apply it to our route.
app.get('/hello', isAuthenticated, function(req, res) {
res.send('look at me!');
});
While this is a simple example, you can see how you can create any function to do checks to see if your user is authenticated, a certain administrator level, or anything else your app needs.
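To see why calling next() matters, here is a minimal sketch (not Express itself) of how a chain of route middleware runs: each function either calls next() to pass control on, or ends the chain by responding.

```javascript
// Run middleware functions in order, Express-style: each receives
// (req, res, next) and the chain only advances when next() is called.
function runChain(middlewares, req, res) {
  var i = 0;
  function next() {
    var mw = middlewares[i++];
    if (mw) mw(req, res, next);
  }
  next();
}

// A gate like isAuthenticated above, plus a final route handler.
function isAuthenticated(req, res, next) {
  if (req.user.authenticated) return next();
  res.redirectedTo = '/'; // stand-in for res.redirect('/')
}
function handler(req, res) {
  res.sent = 'look at me!'; // stand-in for res.send(...)
}
```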
What is Vue.js? How is it different from jQuery? Should I stop using jQuery if I learned Vue.js? Can you use it outside Laravel? If you are a beginner or you just started learning Vue.js, you are probably asking yourself these exact questions, or you are confused and wondering what it does and what its use cases are. If this is you, then this article will help you get over that.
After reading this article you should have an idea about this trending framework, when to use it, and whether you will abandon jQuery for it.
jQuery (write less, do more) is a fast, small, and feature-rich JavaScript library that works across a multitude of browsers and it was created to make writing vanilla JavaScript easier. jQuery allows for DOM/CSS manipulation, event handling, animation, and making AJAX requests.
jQuery can be used for multiple things. A lot of libraries and plugins require it because you can do simple things like alter an input
’s value or get a div
’s content to create amazing slideshows/galleries and wonderful animations.
When you are comfortable writing jQuery code you can absolutely write all your JavaScript using jQuery. Here are few examples to demonstrate how easy jQuery is:
$('#input-id').val();
Note: It doesn’t have to be the id
of the element, you can use all CSS selectors that you are used to: tag name, class name, attribute, first-child, last-child.
$('#element-id').addClass('some-class');
$.get('http://example.com/api/endpoint', function(data){
console.log(data);
});
You can clearly see how easy it is to manipulate the DOM or make AJAX calls using jQuery compared to how you usually would do it using vanilla JavaScript.
It’s so easy that many developers have forgotten how to write simple code with vanilla JavaScript.
You can use jQuery by referencing the CDN like this:
<script src="https://code.jquery.com/jquery-3.2.1.min.js"></script>
Or you can install it via npm:
- npm install jquery
Unlike jQuery, Vue.js is an MVVM framework that is very much inspired by Angular. In fact, its creator Evan You started this project after working at Google with Angular. He decided to extract the cool parts of Angular and create a framework that is really lightweight and much easier to learn and use.
Vue was first released in February 2014 and it has gained popularity in the Laravel world. As I am writing this article, Vue has 4,933,779 npm downloads and 65,422 GitHub stars.
Vue is suitable for small projects where you just want to add a little bit of reactivity, submit a form with AJAX, show the user a modal, display the value of an input as the user is typing, or many other similarly straightforward things. It’s scalable and also fantastic for a huge project. This is why it’s referred to as the progressive framework. You can find code samples for these examples in the official documentation in different languages:
Check the docs for more examples.
Vue is also perfectly designed for large single-page applications thanks to its Router and Vuex core components. We will cover a lot more advanced parts (Components, Filters, Router, Events, Vuex…) of the framework later on here at Scotch.io but if you are the type to learn from reading other people’s code then I highly recommend going through this example: HackerNews Clone.
You can use Vue by referencing the CDN like this:
<script src="https://unpkg.com/vue"></script>
Or you can install it via npm:
- npm install vue
In this chapter we will go through several examples of how you can accomplish different tasks with jQuery and Vue.js:
jQuery: https://jsfiddle.net/4x445r2r/
<button id="button">Click me!</button>
(function() {
$('#button').click(function() {
alert('Clicked!');
});
})();
Vue: https://jsfiddle.net/jwfqtutc/
<div id="app">
<button @click="doSomething">Click me!</button>
</div>
new Vue({
el: '#app',
methods: {
doSomething() {
alert('Clicked!');
}
}
});
jQuery: https://jsfiddle.net/5zdcLdLy/
<input id="input" type="text" placeholder="Enter your name">
(function() {
$('#input').change(function() {
alert('Hello '+ $(this).val());
});
})();
Vue: https://jsfiddle.net/as65e4nt/
<div id="app">
<input @change="doSomething" v-model="name" type="text" placeholder="Enter your name">
</div>
new Vue({
el: '#app',
data: {
name: ''
},
methods: {
doSomething() {
alert('Hello '+ this.name);
}
}
});
jQuery: https://jsfiddle.net/o65nvke2/
<div id="content">
Lorem ipsum dolor sit amet, consectetur adipisicing elit. Amet, modi. Similique amet aliquam magni obcaecati placeat, iusto ipsum enim, perferendis earum modi debitis praesentium, consequatur dolor soluta deserunt. Saepe, laborum.
</div>
(function() {
var className = 'red-text';
$('#content').addClass(className);
})();
Vue: https://jsfiddle.net/a203pyqf/
<div id="app">
<div id="content" :class="className">
Lorem ipsum dolor sit amet, consectetur adipisicing elit. Amet, modi. Similique amet aliquam magni obcaecati placeat, iusto ipsum enim, perferendis earum modi debitis praesentium, consequatur dolor soluta deserunt. Saepe, laborum.
</div>
</div>
new Vue({
el: '#app',
data: {
className: 'red-text'
}
});
jQuery: https://jsfiddle.net/ccLffhot
<div id="content"></div>
<input type="text" id="text" placeholder="Enter your text">
(function() {
$('#text').keyup(function() {
$('#content').html($(this).val());
});
})();
Vue: https://jsfiddle.net/gjLag10s/
<div id="app">
<div v-html="content"></div>
<input type="text" placeholder="Enter your text" v-model="content">
</div>
new Vue({
el: '#app',
data: {
content: ''
}
});
jQuery: https://jsfiddle.net/4LcL5pco/
<div id="content">
Alert!
</div>
<button id="button">Toggle</button>
(function() {
$('#button').click(function() {
$('#content').toggle();
});
})();
Vue: https://jsfiddle.net/a8xoaoqy/
<div id="app">
<div id="content" v-if="visible">
Alert!
</div>
<button @click="visible = !visible">Toggle</button>
</div>
new Vue({
el: '#app',
data: {
visible: true
}
});
jQuery: https://jsfiddle.net/9f4pcakt/
<span>Social Networks:</span>
<select id="networks"></select>
(function() {
var socialNetworks = ['Facebook', 'Twitter', 'YouTube', 'Instagram', 'LinkedIn'];
var list = '';
$.each(socialNetworks, function (index, value) {
list += `<option value="${index}">${value}</option>`;
});
$('#networks').html(list);
})();
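The string-building inside that loop can be isolated into a pure helper (a sketch, hypothetical name) that returns the markup for any list of names:

```javascript
// Build the <option> markup for a list of names, using the array
// index as the option value (same output the loop above produces).
function buildOptions(names) {
  return names.map(function(name, index) {
    return '<option value="' + index + '">' + name + '</option>';
  }).join('');
}
```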
Vue: https://jsfiddle.net/gktr062m/
<div id="app">
<span>Social Networks:</span>
<select id="networks">
<option v-for="(network, index) in socialNetworks" :value="index">{{ network }}</option>
</select>
</div>
new Vue({
el: '#app',
data: {
socialNetworks: ['Facebook', 'Twitter', 'YouTube', 'Instagram', 'LinkedIn']
}
});
jQuery: https://jsfiddle.net/t3qef00y/
<span>List of users:</span>
<ul id="users"></ul>
(function() {
var list = '';
$.get('https://example.com/api/users', function(response) {
$.each(response.data, function (index, user) {
list += `<li>${user.first_name}</li>`;
});
$('#users').html(list);
});
})();
Vue: https://jsfiddle.net/gbjthb3q/
You cannot make AJAX calls with Vue itself, but the team released a package dedicated to that: vue-resource
, the HTTP client for Vue.js
<div id="app">
<span>List of users:</span>
<ul id="users">
<li v-for="user in users">{{ user.first_name }}</li>
</ul>
</div>
new Vue({
el: '#app',
data: {
users: null
},
mounted: function() {
this.$http.get('https://example.com/api/users').then(response => {
this.users = response.body.data;
});
}
});
Now that you have read this article, you know the difference between jQuery and Vue, the benefits that come with each one, and when to use each. I personally still use jQuery when I feel it is enough for the project I am working on, and I use Vue when I need more complexity and reactivity. In the end, it is all a matter of preference and which tools you are more comfortable with.
Laravel and Angular have both become very well-renowned tools in the web development world lately: Laravel for the great things it brings to the PHP community, and Angular for its amazing frontend tools and simplicity. Combining these two great frameworks only seems like the logical next step.
For our use cases, we will be using Laravel as the RESTful API backend and Angular as the frontend to create a very simple single-page comment application.
This will be a simple example to show off how to get started using these two technologies so don’t hope for any extra database stuff on how to handle sub-comments or anything like that.
This will be a simple single-page comment application:
Overall, these are very simple concepts. Our focus will be to see the intricacies of how Laravel and Angular can work together.
Go ahead and get your Laravel setup ready. We’ll be doing some basic things to get our backend to do CRUD on comments:
We will need a simple structure for our comments. We just need text
and author
. Let’s create our Laravel migration to create our comments.
Let’s run the artisan command that will create our comments migration so that we can create the table in our database:
- php artisan migrate:make create_comments_table --create=comments
We’ll use the Laravel Schema Builder to create the text
and author
fields that we need. Laravel will also create the id
column and the timestamps
so that we know how long ago the comment was made. Here is the code for the comments table:
...
/**
* Run the migrations.
*
* @return void
*/
public function up()
{
Schema::create('comments', function(Blueprint $table)
{
$table->increments('id');
$table->string('text');
$table->string('author');
$table->timestamps();
});
}
...
Make sure you go adjust your database settings in app/config/database.php
with the right credentials. Now we will run the migration so that we create this table with the columns that we need:
- php artisan migrate
With our table made, let’s create an Eloquent model so that we can interact with it.
We will be using Laravel Eloquent models to interact with our database. This will be very easy to do. Let’s create a model: app/models/Comment.php
.
<?php
class Comment extends Eloquent {

// let eloquent know that these attributes will be available for mass assignment
protected $fillable = array('author', 'text');

}
We now have our new table and model. Let’s fill it with some sample data using Laravel Seeding.
We will need a few comments so that we can test a few things. Let’s create a seed file and fill our database with 3 sample comments.
Create a file: app/database/seeds/CommentTableSeeder.php
and fill it with this code.
<?php
class CommentTableSeeder extends Seeder {
public function run()
{
DB::table('comments')->delete();
Comment::create(array(
'author' => 'Chris Sevilleja',
'text' => 'Comment by Chris.'
));
Comment::create(array(
'author' => 'Nick Cerminara',
'text' => 'Comment by Nick.'
));
Comment::create(array(
'author' => 'Holly Lloyd',
'text' => 'Comment by Holly.'
));
}
}
To call this Seeder file, let’s open app/database/seeds/DatabaseSeeder.php
and add the following:
...
/**
* Run the database seeds.
*
* @return void
*/
public function run()
{
Eloquent::unguard();
$this->call('CommentTableSeeder');
$this->command->info('Comment table seeded.');
}
...
Now let’s run our seeders using artisan.
- php artisan db:seed
Now we have a database with a comment table, an Eloquent model, and samples in our database. Not bad for a day’s work… but we’re not even close to done yet.
app/controllers/CommentController.php
We will use Laravel’s resource controllers to handle our API functions for comments. Since we’ll be using Angular to display a resource and show create and update forms, we’ll create a resource controller with artisan without the create
or edit
functions.
Let’s create our controller using artisan.
- php artisan controller:make CommentController --only=index,store,destroy
For our demo app, we’ll only be using these three functions in our resource controller. To expand on this, you’d want to include the remaining functions like show and update for a more fully-fledged app.
Now we’ve created our controller. We don’t need the create
and edit
functions because Angular will be handling showing those forms, not Laravel. Laravel is just responsible for sending data back to our frontend. We also took out the update
function for this demo just because we want to keep things simple. We’ll handle creating, showing, and deleting comments.
To send data back, we will want to send all our data back as JSON. Let’s go through our newly created controller and fill out our functions accordingly.
<?php
class CommentController extends BaseController {
/**
* Send back all comments as JSON
*
* @return Response
*/
public function index()
{
return Response::json(Comment::get());
}
/**
* Store a newly created resource in storage.
*
* @return Response
*/
public function store()
{
Comment::create(array(
'author' => Input::get('author'),
'text' => Input::get('text')
));
return Response::json(array('success' => true));
}
/**
* Remove the specified resource from storage.
*
* @param int $id
* @return Response
*/
public function destroy($id)
{
Comment::destroy($id);
return Response::json(array('success' => true));
}
}
You can see how easy it is to handle CRUD with Laravel and Eloquent. It’s incredibly simple to handle all the functions that we need.
With our controller ready to go, the last thing we need to do for our backend is routing.
Extra Reading: Simple Laravel CRUD with Resource Controllers
With our database ready to rock and roll, let’s handle the routes of our Laravel application. We will need routes to send users to the Angular frontend since that will have its own routing. We will also need routes for our backend API so people can access our comment data.
Let’s create the Angular pointing routes. We will need one for the home page and a catch-all route to send users to Angular. This ensures that any way a user accesses our site, they will be routed to the Angular frontend.
We’ll be prefixing our API routes with… (drumroll please)… api
. This way, if somebody wants to get all comments, they will use the URL: http://example.com/api/comments
. This just makes sense moving forward and is some basic API creation good tactics.
<?php
// HOME PAGE ===================================
// we dont need to use Laravel Blade
// we will return a PHP file that will hold all of our Angular content
// see the "Where to Place Angular Files" section below for ideas on how to structure your app
Route::get('/', function() {
return View::make('index'); // will return app/views/index.php
});
// API ROUTES ==================================
Route::group(array('prefix' => 'api'), function() {
// since we will be using this just for CRUD, we won't need create and edit
// Angular will handle both of those forms
// this ensures that a user can't access api/create or api/edit when there's nothing there
Route::resource('comments', 'CommentController',
array('only' => array('index', 'store', 'destroy')));
});
// CATCH ALL ROUTE =============================
// all routes that are not home or api will be redirected to the frontend
// this allows angular to route them
App::missing(function($exception) {
return View::make('index');
});
We now have our routes to handle the 3 main things our Laravel backend needs to do.
Handling Catch-All Routes: In Laravel, you can do this a few ways. Usually, it isn’t ideal to do the above code and have a catch-all for your entire application. The alternative is that you can use Laravel Controller Missing Methods to catch routes.
Testing All Our Routes
Let’s make sure we have all the routes we need. We’ll use artisan and see all our routes:
- php artisan routes
This command will let us see our routes and sort of a top-down view of our application.
We can see the HTTP verb and the route used to get all comments, get a single comment, create a comment, and destroy a comment. On top of those API routes, we can also see how a user gets routed to our Angular application by the home page route.
Finally! Our Laravel API backend is done. We have done so much and yet, there’s still so much to do. We have set up our database and seeded it, created our models and controllers, and created our routes. Let’s move on to the frontend Angular work.
I’ve seen this question asked a lot: where exactly should I be putting Angular files, and how do Laravel and Angular work together? We did an article on Using Laravel Blade with AngularJS. This article works under the assumption that we aren’t even going to use Blade.
To let Angular handle the frontend, we will need Laravel to pass our user to our index.php
file. We can place this in a few different places. By default, when you use:
Route::get('/', function() {
return View::make('index');
});
This will return app/views/index.php
. Laravel will by default look in the app/views
folder.
Some people may want to keep Angular files completely separate from Laravel files. They will want their entire application to be housed inside of the public
folder. To do this is simple: just change the default View location to the public folder. This can be done in the app/config/view.php
file.
...
// make laravel look in public/views for view files
'paths' => array(__DIR__.'/../../public/views'),
...
Now return View::make('index')
will look for public/views/index.php
. It is all preference on how you’d like to structure your app. Some people see it as a benefit to have the entire Angular application in the public folder so that it is easier to handle routing and if it is needed in the future, to completely separate the backend RESTful API and the Angular frontend.
If you use Angular routing, your partial files will also be placed in the public folder, but that’s outside the scope of this article. For more information on that kind of single-page Angular routing, check out Single Page Angular Application Routing.
Let’s assume we left everything default and our main view file is in our app/views
folder and move forward.
Routing with Laravel and Angular There are a lot of questions about having routing with Laravel and Angular and if they conflict. Laravel will handle the main routing for your application. Angular routing will only happen when Laravel routes our user to the main Angular route (index.php
) in this case. This is why we use a Laravel catch-all route. Laravel will handle the API routes and anything it doesn’t know how to route will be sent to Angular. You can then set up all the routing for your Angular application to handle showing different views.
Everything for our Angular application will be handled in the public
folder. This lets us keep a good separation of the backend in the app
folder.
Let’s look at the application structure we will have in our public
folder. We’ve created our Angular application to be modular since that is best practice. Now our separated parts of our application will be easy to test and work with.
- public/
----- js/
---------- controllers/ // where we will put our angular controllers
--------------- mainCtrl.js
---------- services/ // angular services
--------------- commentService.js
---------- app.js
public/js/services/commentService.js
Our Angular service is going to be the primary place where we will have our HTTP calls to the Laravel API. It is pretty straightforward and we use the Angular $http service.
angular.module('commentService', [])
.factory('Comment', function($http) {
return {
// get all the comments
get : function() {
return $http.get('/api/comments');
},
// save a comment (pass in comment data)
save : function(commentData) {
return $http({
method: 'POST',
url: '/api/comments',
headers: { 'Content-Type' : 'application/x-www-form-urlencoded' },
data: $.param(commentData)
});
},
// destroy a comment
destroy : function(id) {
return $http.delete('/api/comments/' + id);
}
}
});
This is our Angular service with 3 different functions. These are the only functions we need since they will correspond to the API routes we made in our Laravel routes.
We will be returning the promise object from our service. These will be dealt with in our controllers. The naming convention here also stays the same as the Laravel controller that we have.
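To make the promise-passing idea concrete, here is a hedged sketch in plain JavaScript (not actual Angular code): the service method returns a promise, and the caller chains its own handler onto it. `fetchComments` is a hypothetical stand-in for the real `$http.get` call.

```javascript
// Plain-JS sketch of the service pattern: the service returns a promise,
// and the "controller" decides what to do with the resolved data.
// fetchComments stands in for the real $http.get('/api/comments') call.
function fetchComments() {
  return Promise.resolve([{ id: 1, author: 'Chris', text: 'First!' }]);
}

var CommentService = {
  get: function () { return fetchComments(); }
};

// the controller side: attach a handler to the returned promise
CommentService.get().then(function (comments) {
  console.log('loaded ' + comments.length + ' comment(s)');
});
```

The real service works the same way; the only difference is that the promise comes from `$http`.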
With our Angular Service done, let’s go into our controller and use it.
public/js/controllers/mainCtrl.js
The controller is where we will have most of the functionality for our application. This is where we will create functions to handle the submit forms and deleting on our view.
angular.module('mainCtrl', [])
// inject the Comment service into our controller
.controller('mainController', function($scope, $http, Comment) {
// object to hold all the data for the new comment form
$scope.commentData = {};
// loading variable to show the spinning loading icon
$scope.loading = true;
// get all the comments first and bind it to the $scope.comments object
// use the function we created in our service
// GET ALL COMMENTS ==============
Comment.get()
.success(function(data) {
$scope.comments = data;
$scope.loading = false;
});
// function to handle submitting the form
// SAVE A COMMENT ================
$scope.submitComment = function() {
$scope.loading = true;
// save the comment. pass in comment data from the form
// use the function we created in our service
Comment.save($scope.commentData)
.success(function(data) {
// if successful, we'll need to refresh the comment list
Comment.get()
.success(function(getData) {
$scope.comments = getData;
$scope.loading = false;
});
})
.error(function(data) {
console.log(data);
});
};
// function to handle deleting a comment
// DELETE A COMMENT ====================================================
$scope.deleteComment = function(id) {
$scope.loading = true;
// use the function we created in our service
Comment.destroy(id)
.success(function(data) {
// if successful, we'll need to refresh the comment list
Comment.get()
.success(function(getData) {
$scope.comments = getData;
$scope.loading = false;
});
});
};
});
As you can see in our controller, we have injected our Comment
service and use it for the main functions: get
, save
, and destroy
. Using a service like this helps keep our controller from being polluted with $http
calls.
public/js/app.js
On the Angular side of things, we have created our service and our controller. Now let’s link everything together so that we can apply it to our application using ng-app
and ng-controller
.
This will be the code to create our Angular application, injecting the service and controller into it. This is a best practice since it keeps our application modular, and each part can be tested and extended on its own.
var commentApp = angular.module('commentApp', ['mainCtrl', 'commentService']);
That’s it! Not much to it. Now we’ll actually get to our view where we can see how all these Angular parts work together.
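Angular resolves those string dependencies by name when the application bootstraps. As a rough illustration only (this is not Angular’s actual implementation), a name-based module registry might look like:

```javascript
// Toy name-based module registry mimicking angular.module(name, deps):
// modules register under a name, and dependencies are looked up by name later.
var registry = {};

function module(name, deps) {
  registry[name] = { name: name, deps: deps || [] };
  return registry[name];
}

module('mainCtrl', []);
module('commentService', []);
var app = module('commentApp', ['mainCtrl', 'commentService']);

// every declared dependency must already be registered by name
var resolved = app.deps.every(function (d) { return d in registry; });
console.log(resolved); // true
```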
app/views/index.php
So far, after everything we’ve done up to this point, we still won’t be able to see anything in our browser. We need to define our view file, since both our home route and our catch-all route return View::make('index')
.
Let’s go ahead and create that view now. We will be using all the Angular parts that we’ve created. The main parts that we’ve created from Angular that we’ll use in index.php
are:
- ng-app and ng-controller applied to the body
tag
- ng-submit
to handle submitting the comment form
- the loading
variable: when it is set to true, we’ll show a loading icon and hide the comments

Now let’s get to the actual code for our view. We’ll comment the main important parts so we can see how everything works together.
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Laravel and Angular Comment System</title>
<!-- CSS -->
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.1.0/css/bootstrap.min.css"> <!-- load bootstrap via cdn -->
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/font-awesome/4.0.3/css/font-awesome.min.css"> <!-- load fontawesome -->
<style>
body { padding-top:30px; }
form { padding-bottom:20px; }
.comment { padding-bottom:20px; }
</style>
<!-- JS -->
<script src="//ajax.googleapis.com/ajax/libs/jquery/2.0.3/jquery.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.8/angular.min.js"></script> <!-- load angular -->
<!-- ANGULAR -->
<!-- all angular resources will be loaded from the /public folder -->
<script src="js/controllers/mainCtrl.js"></script> <!-- load our controller -->
<script src="js/services/commentService.js"></script> <!-- load our service -->
<script src="js/app.js"></script> <!-- load our application -->
</head>
<!-- declare our angular app and controller -->
<body class="container" ng-app="commentApp" ng-controller="mainController">
<div class="col-md-8 col-md-offset-2">
<!-- PAGE TITLE =============================================== -->
<div class="page-header">
<h2>Laravel and Angular Single Page Application</h2>
<h4>Commenting System</h4>
</div>
<!-- NEW COMMENT FORM =============================================== -->
<form ng-submit="submitComment()"> <!-- ng-submit will disable the default form action and use our function -->
<!-- AUTHOR -->
<div class="form-group">
<input type="text" class="form-control input-sm" name="author" ng-model="commentData.author" placeholder="Name">
</div>
<!-- COMMENT TEXT -->
<div class="form-group">
<input type="text" class="form-control input-lg" name="comment" ng-model="commentData.text" placeholder="Say what you have to say">
</div>
<!-- SUBMIT BUTTON -->
<div class="form-group text-right">
<button type="submit" class="btn btn-primary btn-lg">Submit</button>
</div>
</form>
<!-- LOADING ICON =============================================== -->
<!-- show loading icon if the loading variable is set to true -->
<p class="text-center" ng-show="loading"><span class="fa fa-meh-o fa-5x fa-spin"></span></p>
<!-- THE COMMENTS =============================================== -->
<!-- hide these comments if the loading variable is true -->
<div class="comment" ng-hide="loading" ng-repeat="comment in comments">
<h3>Comment #{{ comment.id }} <small>by {{ comment.author }}</small></h3>
<p>{{ comment.text }}</p>
<p><a href="#" ng-click="deleteComment(comment.id)" class="text-muted">Delete</a></p>
</div>
</div>
</body>
</html>
Now we finally have our view that brings all of the parts we created together. You can go ahead and play around with the application. All the parts should fit together nicely and creating and deleting comments should be done without a page refresh.
Make sure you take a look at the GitHub repo to test the application. Here are some quick instructions to get you going.
git clone git@github.com:scotch-io/laravel-angular-comment-app
composer install --prefer-dist
edit app/config/database.php with your own database credentials
php artisan migrate
php artisan db:seed
Hopefully, this tutorial gives a good overview of how to start an application using Laravel and Angular. You can bring this further and create a full application that can handle multiple API calls on the Laravel side, and even create your own Angular routing for multiple pages.
Sound off in the comments if you have any questions or would like to see a specific use case. We can also expand on this demo and start adding different things like editing a comment, user profiles, whatever.
ScrollMagic is a jQuery plugin which lets you use the scrollbar like a playback scrub control. Using this, you can build some extremely beautiful landing pages and websites. Normally, we wouldn’t do a tutorial on using a single jQuery plugin, but ScrollMagic does a lot and has quickly become one of my favorite plugins.
In this article, I’ll cover my general opinion on scroll plugins, how to get started with ScrollMagic, and some basic and over-the-top creative demos.
I’m not a fan of hijacking a user’s scroll, period. I personally believe it’s way too easy to ruin a user’s experience, and it makes it difficult to quickly navigate to specific content. It takes a lot for me to consider using a jQuery plugin that heavily affects normal scroll behavior. ScrollMagic doesn’t really hijack a user’s scroll, despite its name alluding to the idea that it would. It simply triggers a bunch of events during a user’s scroll. For example, compare these two sites:
Notice that with the Google Cardboard site you can quickly navigate up and down, but with fullPage.js you’re actually restricted and delayed in your scrolling. FullPage.js is nevertheless a great and impressive plugin; it’s just not the user experience I like to create.
Lastly, if you check out ScrollMagic’s demo page you’ll see a ton of crazy animations. The demo is definitely over the top and doesn’t really do justice to the advantages of using ScrollMagic in simpler designs. I hope after reading this article, though, that you understand and enjoy the benefits as much as I do.
Here’s a little sample of one of the things we’ll be able to build:
See the Pen ScrollMagic Demos - Class Toggles by Nicholas Cerminara (@ncerminara) on CodePen.
To get started you’ll need a few dependencies.
ScrollMagic requires jQuery. You’ll need to include it to be able to use ScrollMagic at all. I’m going to include the last jQuery release from before Internet Explorer 8 support was dropped (it was dropped in jQuery 2.x), despite ScrollMagic only supporting Internet Explorer 9 and above.
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
ScrollMagic uses the GreenSock Animation Platform (GSAP) for doing animations. Technically, the GreenSock platform is completely optional, but it makes total sense to use it. GSAP is nice because it has its own little framework with its own dependencies and plugins. If performance is a huge factor for you, you can pick and choose only exactly what you need. However, we’re going to use the whole library to take advantage of all its cool features.
<script src="//cdnjs.cloudflare.com/ajax/libs/gsap/1.14.2/TweenMax.min.js"></script>
Next, you’ll need to include ScrollMagic. ScrollMagic also comes with a nice but separate debugging library. I’ll include it for the demos, but on production environments, there’s no need to include it.
<script src="jquery.scrollmagic.min.js"></script>
<script src="jquery.scrollmagic.debug.js"></script>
And here it is all together with the full HTML and references to Bootstrap for CSS:
<!doctype html>
<!--[if lt IE 7 ]><html id="ie6" class="ie ie-old ie-super-old"><![endif]-->
<!--[if IE 7 ]> <html id="ie7" class="ie ie-old ie-super-old"><![endif]-->
<!--[if IE 8 ]> <html id="ie8" class="ie ie-old ie-super-old"><![endif]-->
<!--[if IE 9 ]> <html id="ie9" class="ie ie-old"><![endif]-->
<!--[if gt IE 9]><!--><html><!--<![endif]-->
<head>
<!-- Meta -->
<meta charset="utf-8">
<title>Scotch Magic ♥</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta content="yes" name="apple-mobile-web-app-capable">
<meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">
<!-- Favicons -->
<link rel="shortcut icon" sizes="16x16 24x24 32x32 48x48 64x64" href="/wp-content/favicon.ico">
<!-- Styles -->
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.3.0/css/bootstrap.min.css">
<link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css">
<!--[if lt IE 9]>
<script src="//oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script>
<script src="//oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
<![endif]-->
<link rel="stylesheet" href="style.css"><!-- Reference to your stylesheet -->
</head>
<body>
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/gsap/1.14.2/TweenMax.min.js"></script>
<script src="jquery.scrollmagic.min.js"></script>
<script src="jquery.scrollmagic.debug.js"></script><!-- Dev only -->
<script src="power.js"></script><!-- scripts.js, main.js, etc. -->
</body>
</html>
Typically when you initiate a jQuery plugin you just pass a bunch of options and call it a day. Sometimes a plugin will have advanced features like a callbacks API or the ability to return the entire plugin as an object with some public functions so you can get real custom with it.
ScrollMagic is a little bit different than this. We’re going to initiate a ScrollMagic Controller, create a bunch of animation objects, create some Scene (where the animation happens) objects, combine the animation and scene objects, then pass it all back to the main ScrollMagic Controller. So our general steps will be:
It’s nothing too crazy as it’s your typical JavaScript stuff, but understanding the underlying structure of how it all plays together will help you move forward with it. It’s a little bit more involved than your standard jQuery plug and chug plugins.
Now, all that being said, this is how to initiate the ScrollMagic Controller:
$(function() {
// Init Controller
var scrollMagicController = new ScrollMagic();
});
Now let’s create the two most basic examples that ScrollMagic does for us.
All this does is trigger an animation. We’ll do two things to get this working. First, create the Animation on the element we want to animate. Second, we’ll create the Scene, which will trigger the animation when it is scrolled into view. So, let’s go ahead and create that first animation (we’ll cover these more in-depth further in the article):
// Create Animation for 0.5s
var tween = TweenMax.to('#animation', 0.5, {
backgroundColor: 'rgb(255, 39, 46)',
scale: 7,
rotation: 360
});
Pretty simple! This will add those CSS properties to the element with the ID of #animation
. However, we need to control when those animations happen. ScrollMagic will make it easy to bind the animation to certain scroll events by creating Scenes. Here’s the next piece of code:
// Create the Scene and trigger when visible with ScrollMagic
var scene = new ScrollScene({
triggerElement: '#scene',
offset: 150 /* offset the trigger 150px below #scene's top */
})
.setTween(tween)
.addTo(scrollMagicController);
We create the Scene as an object to be triggered later, then we pass which animations we want to that Scene, and, finally, we pass it all back to the ScrollMagicController
to be handled. Here’s a stripped and naked example to help explain:
See the Pen ScrollMagic Demos - Basic Example by Nicholas Cerminara (@ncerminara) on CodePen.
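That setTween(...).addTo(...) chaining works because each scene method returns the scene itself. Here is a toy sketch of the fluent pattern (an illustration only, not ScrollMagic’s real internals):

```javascript
// Minimal fluent-interface sketch: each method returns `this` so calls chain,
// mirroring ScrollScene's setTween().addTo() style.
function Scene(options) {
  this.options = options;
}
Scene.prototype.setTween = function (tween) {
  this.tween = tween;
  return this; // returning the scene enables chaining
};
Scene.prototype.addTo = function (controller) {
  controller.scenes.push(this);
  return this;
};

var controller = { scenes: [] };
var scene = new Scene({ triggerElement: '#scene' })
  .setTween('fake-tween')
  .addTo(controller);

console.log(controller.scenes.length); // 1
```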
The last example only triggers the animation at the specified Scene trigger point. ScrollMagic can bind your animation to the scroll event. This acts as a rewind and fast-forward scrubber for your animation. Here’s the code for doing that:
// Duration ignored / replaced by scene duration now
var tween = TweenMax.to('#animation', 0.5, {
backgroundColor: 'rgb(255, 39, 46)',
scale: 5,
rotation: 360
});
var scene = new ScrollScene({
triggerElement: '#scene',
duration: 300 /* How many pixels to scroll / animate */
})
.setTween(tween)
.addTo(scrollMagicController);
You should immediately see that the only difference that matters between the two examples is that the duration
property is added to the scene. This will be how many pixels you want the animation to be on scroll. Here’s an example to visualize the difference between the two methods:
See the Pen ScrollMagic Demos - Animations Binded to Scroll by Nicholas Cerminara (@ncerminara) on CodePen.
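The duration option effectively maps pixels scrolled past the trigger point onto animation progress. Here is that mapping sketched as a pure function (sceneProgress is a hypothetical helper for illustration, not part of ScrollMagic’s API):

```javascript
// Map a scroll position to animation progress (0..1), the way a ScrollMagic
// scene with a pixel duration conceptually does.
// triggerY: scroll offset where the scene starts; duration: pixels to scrub over.
function sceneProgress(scrollY, triggerY, duration) {
  var raw = (scrollY - triggerY) / duration;
  return Math.min(1, Math.max(0, raw)); // clamp to [0, 1]
}

console.log(sceneProgress(0, 100, 300));   // before the trigger -> 0
console.log(sceneProgress(250, 100, 300)); // halfway through -> 0.5
console.log(sceneProgress(900, 100, 300)); // past the end -> 1
```

Scrolling backward simply drives the progress value back down, which is why the animation rewinds.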
There are a ton of options for doing animations. I’ll cover several of them, but first, let’s do the most common one - “tweening” using the GreenSock Animation Platform.
Tweening is what GSAP calls its animations. We’re specifically using their TweenMax library. TweenMax is awesome because it encompasses all their various plugins and additions in one package. This gives us some cross-browser support, makes the browser use CSS3 animations first, is extremely performant, and lets you create complex animations and keyframes with ease. Alternatively, you can work piecemeal and pick exactly which components you want with TweenLite and its plugins.
This lets us create our most standard animations. For example, if you want an element’s background color to go from its default to red:
// TweenMax.to(selectorOfElementYouWantToAnimate, DurationOfAnimation, AnimationProperties);
var tween = TweenMax.to('#first-animation', 0.5, {backgroundColor: 'red'});
You can get as infinitely creative as you want with this. For example, the following tween will make the background red, grow the element to 5 times its size, and do a full spin rotation using CSS3 Transforms.
var tween = TweenMax.to('#first-animation', 0.5, {
backgroundColor: 'red',
scale: 5,
rotation: 360
});
You can do pretty much anything you would be able to do with CSS3 animations - colors, transforms, etc. Here’s the official resource so you can reference it for syntax. You can view the examples directly above to see the TweenMax.to()
function in action.
This works exactly the opposite of TweenMax.to()
. It will animate to the default styles from the specified animation options. Here’s some example code:
var tween = TweenMax.from('#animation', 0.5, {
backgroundColor: 'rgb(255, 39, 46)',
scale: 5,
rotation: 360
});
Here’s an example from one of the basic demos using the from()
function instead:
See the Pen ScrollMagic Demos - Animation Trigger by Nicholas Cerminara (@ncerminara) on CodePen.
This function is exactly what it sounds like. You’ll specify two animation properties for it to animate from one and then to the other. Hopefully, you’re wondering why you can’t just use the to()
function and set the start styles with CSS. Well, you can and that’s totally okay. The function fromTo
however introduces a bunch of other options like yoyo
and repeat
. You can use those to create keyframe-style animations when the scroll event is triggered. Check out the code below:
var tween = TweenMax.fromTo('#animation', 0.5,
{
backgroundColor: 'rgb(255, 39, 46)',
scale: 5,
left: -400
},
{
scale: 1,
left: 400,
rotation: 360,
repeat: -1, /* Aka an infinite amount of repeats */
yoyo: true /* Make it go back and forth or not */
}
);
See the Pen TweenMax.fromTo() with Repeat and Yoyo Turned On by Nicholas Cerminara (@ncerminara) on CodePen.
See the Pen TweenMax.fromTo() with Repeat and Yoyo Turned Off by Nicholas Cerminara (@ncerminara) on CodePen.
With both of these examples, if you remove the Scene’s duration, there will be no endpoint for the animation to stop.
You can easily give multiple elements the same animation with staggered start times, all within the same Scene. This is called staggering and is very easy to do. Here’s a code sample followed by a demo:
var tween = TweenMax.staggerFromTo('.animation', 0.5,
{
scale: 1,
},
{
backgroundColor: 'rgb(255, 39, 46)',
scale: 5,
rotation: 360
},
0.4 /* Stagger duration */
);
See the Pen ScrollMagic Demos - Staggering Animations by Nicholas Cerminara (@ncerminara) on CodePen.
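The stagger duration is just a per-element offset: the i-th matched element starts its animation i times the stagger value after the first. Sketched as plain math (staggerStartTimes is a hypothetical helper, not part of GSAP’s API):

```javascript
// Timing sketch of what staggerFromTo amounts to: the i-th matched element's
// animation starts i * stagger seconds after the first one.
function staggerStartTimes(count, stagger) {
  var starts = [];
  for (var i = 0; i < count; i++) {
    starts.push(i * stagger);
  }
  return starts;
}

console.log(staggerStartTimes(4, 0.5)); // [ 0, 0.5, 1, 1.5 ]
```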
There are even more things you can do. For example, you can animate to all the CSS properties contained in a certain class. You can also chain animations together to get even more complex and creative. This article won’t cover all that, but you can check out the docs here for more information.
ScrollMagic also lets you easily toggle as many classes as you want when the Scene is activated. This is super handy for doing some complex stuff without the additional JavaScript. For example, if we wanted to just toggle a body class to change some colors around, all we would need to do is add the following code to the Scene.
.setClassToggle('body', 'scene-is-active')
This brings so much control and power in my opinion. Check out the quick demo I put together to demonstrate the extra amount of depth you gain.
See the Pen ScrollMagic Demos - Class Toggles by Nicholas Cerminara (@ncerminara) on CodePen.
I bunched custom containers and mobile support together because they’re really one and the same. Typically, on a mobile touch device, the scroll event isn’t fired until the scroll has stopped. This is unfortunate for what we’re doing. Fortunately, that only occurs when you’re scrolling the entire body. If, say, you’re scrolling inside an element that is set to overflow: scroll
, each moment of the scroll is detected.
ScrollMagic lets you specify any container you want for your scenes by a simple option. Here’s a code example:
var scrollMagicController = new ScrollMagic({container: '#my-container'});
You can do cool things with this like putting your animations and scroll inside of a div or section of your website. Mobile support takes this exact same concept and creates a “container” or wrapper around the whole site and just binds ScrollMagic to that container.
Truth be told, it’s a hackish workaround. The major downside is that it kills off support for momentum scrolling. ScrollMagic’s official answer to this is to use something like iScroll.js to bring momentum scrolling to touch devices. You may also be able to just add this CSS3 property to the container: -webkit-overflow-scrolling: touch;
, but that obviously won’t work in all browsers. It’s entirely up to you to support this or not, but if you get it right it can really provide a seamless experience on mobile.
Lastly, if you want to just disable ScrollMagic on mobile or touch devices, you can do it easily with Modernizr:
if (!Modernizr.touch) {
// Start ScrollMagic code
}
If you’re not a fan of using another library like Modernizr just to detect touch, you can use this function:
if (!is_touch_device()) {
// Start ScrollMagic code
}
function is_touch_device() {
return 'ontouchstart' in window // works on most browsers
|| 'onmsgesturechange' in window; // works on ie10
};
I grabbed that function from this StackOverflow post. The answer there is updated often, so check back if you want to make sure it’s still current.
ScrollMagic’s official documentation and examples are amazing. I definitely recommend heading over there and checking out all the other things ScrollMagic can do. Some of those things include:
There’s definitely a lot you can do with ScrollMagic. I think it’s generally smart to use it with caution, at the risk of ruining the user experience with bad animations. Complex ScrollMagic websites are probably best saved for landing pages; using it subtly works best for content-based websites. It’s all about finding a balance.
Beginning an application from scratch can sometimes be the hardest thing to do. Staring at an empty folder and a file with no code in it yet can be a very daunting thing.
In today’s tutorial, we will be looking at the starting setup for a Node.js, AngularJS, MongoDB, and Express application (otherwise known as MEAN). I put those in the wrong order, I know.
This will be a starting point for those that want to learn how to begin a MEAN stack application. Projects like mean.io and meanjs.org are more fully-fledged MEAN applications with many great features you’d want for a production project.
You will be able to start from absolute scratch and create a basic application structure that will allow you to build any sort of application you want.
Note: (7/8/14): Updated article for Express 4 support. Thanks to Caio Mariano for the help.
Note (10/12/14): Updated article to add Nerd model and make everything clearer.
This article has been updated to work with Express 4.0
A lot of the applications we’ve dealt with so far had a specific function, like our Node and Angular To-Do Single Page Application. We are going to step away from that and just build a good old getting-started application.
This will be very barebones, but hopefully it will help you set up your applications. Let’s just call it a starter kit.
This tutorial will be more based on application structure and creating a solid foundation for single-page MEAN stack applications. For more information on CRUD, authentication, or other topics in MEAN apps we’ll make sure to write other tutorials to fill those gaps.
Three letters out of the MEAN stack will be handled on the backend, our server. We will create our server, configure our application, and handle application routing.
We will need Node and, to make our lives easier, we’ll use Bower to pull in all our dependencies.
Bower isn’t really necessary. You could pull in all the files we need yourself (bootstrap
, angular
, angular-route
), but bower just gets them all for you! For more info, read our guide on bower to get a better understanding.
By the end of this tutorial, we will have a basic application structure that will help us develop our Node backend along with our Angular frontend. Here’s what it will look like.
- app
----- models/
---------- nerd.js <!-- the nerd model to handle CRUD -->
----- routes.js
- config
----- db.js
- node_modules <!-- created by npm install -->
- public <!-- all frontend and angular stuff -->
----- css
----- js
---------- controllers <!-- angular controllers -->
---------- services <!-- angular services -->
---------- app.js <!-- angular application -->
---------- appRoutes.js <!-- angular routes -->
----- img
----- libs <!-- created by bower install -->
----- views
---------- home.html
---------- nerd.html
---------- geek.html
----- index.html
- .bowerrc <!-- tells bower where to put files (public/libs) -->
- bower.json <!-- tells bower which files we need -->
- package.json <!-- tells npm which packages we need -->
- server.js <!-- set up our node application -->
We’ll be filling in our files into a folder structure. All backend work is done in server.js
, app
, and config
while all the frontend is handled in the public
folder.
All Node applications will start with a package.json
file so let’s begin with that.
{
"name": "starter-node-angular",
"main": "server.js",
"dependencies": {
"express" : "~4.5.1",
"mongoose" : "~3.8.0",
"body-parser" : "~1.4.2",
"method-override" : "~2.0.2"
}
}
That’s it! Now our application will know that we want to use Express and Mongoose.
Note: Since Express 4.0, body-parser
and method-override
are their own modules, which is why we have to include them here. For more information, here’s our guide to Express 4.
Express is a Node.js web application framework that will help us create our application. Mongoose is a MongoDB object modeling library (ODM) that will help us communicate with our MongoDB database.
To install the dependencies we just set up, go into your console and type:
npm install
You’ll see npm working to bring those modules into the node_modules
directory that it creates.
Now that we have those, let’s set up our application in server.js
.
Since this is our starter kit for a single-page MEAN application, we are going to keep this simple. The entire code for the file is here, and it is commented to help with understanding.
// modules =================================================
var express = require('express');
var app = express();
var bodyParser = require('body-parser');
var methodOverride = require('method-override');
// configuration ===========================================
// config files
var db = require('./config/db');
// set our port
var port = process.env.PORT || 8080;
// connect to our mongoDB database
// (uncomment after you enter in your own credentials in config/db.js)
// mongoose.connect(db.url);
// get all data/stuff of the body (POST) parameters
// parse application/json
app.use(bodyParser.json());
// parse application/vnd.api+json as json
app.use(bodyParser.json({ type: 'application/vnd.api+json' }));
// parse application/x-www-form-urlencoded
app.use(bodyParser.urlencoded({ extended: true }));
// override with the X-HTTP-Method-Override header in the request. simulate DELETE/PUT
app.use(methodOverride('X-HTTP-Method-Override'));
// set the static files location /public/img will be /img for users
app.use(express.static(__dirname + '/public'));
// routes ==================================================
require('./app/routes')(app); // configure our routes
// start app ===============================================
// startup our app at http://localhost:8080
app.listen(port);
// shoutout to the user
console.log('Magic happens on port ' + port);
// expose app
exports = module.exports = app;
We have now pulled in our modules, configured our application for things like our database and Express settings, set up our routes, and then started our server. Notice how we didn’t pull in mongoose
here. There’s no need for it yet. We will be using it in our model, which we will define soon.
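The require('./app/routes')(app) line works because the routes file exports a function that receives the app and registers routes on it. Here is a self-contained sketch of that pattern, with fakeApp as a hypothetical stand-in for the Express app:

```javascript
// app/routes.js exports a function that takes the app and registers routes.
// Sketched here with a tiny fake "app" so it runs without Express.
var routes = function (app) {
  app.get('/api/nerds', function (req, res) { /* handler body */ });
};

// a minimal stand-in that records which routes get registered
var fakeApp = {
  registered: [],
  get: function (path, handler) { this.registered.push(path); }
};

routes(fakeApp); // what require('./app/routes')(app) does in server.js
console.log(fakeApp.registered); // [ '/api/nerds' ]
```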
Let’s look at config
, a quick model
, and routes
since we haven’t created those yet. Those will be the last things that the backend side of our application needs.
I know it doesn’t seem like much now since we are only putting the db.js
config file here, but this was more for demonstration purposes. In the future, you may want to add more config files and call them in server.js
so this is how we will do it.
module.exports = {
url : 'mongodb://localhost/stencil-dev'
}
Now that this file is defined and we’ve called it in our server.js
using var db = require('./config/db');
, you can call any items inside of it using db.url
.
To get this working, you’ll want a local MongoDB database installed, or you can grab a quick one-off service like Modulus or MongoLab. Just go ahead and create an account at one of those, create a database with your own credentials, and you’ll be able to get the URL string to use in your own config file.
Next up, we’ll create a quick Mongoose model so that we can define our Nerds in our database.
This will be all that is required to create records in our database. Once we define our Mongoose model, it will let us handle creating, reading, updating, and deleting our nerds.
Let’s go into the app/models/nerd.js
file and add the following:
// grab the mongoose module
var mongoose = require('mongoose');
// define our nerd model
// module.exports allows us to pass this to other files when it is called
module.exports = mongoose.model('Nerd', {
name : {type : String, default: ''}
});
This is where we will use the Mongoose module and be able to define our Nerd model with a name attribute with data type String
. If you want more fields, feel free to add them here. Read up on the Mongoose docs to see all the things you can define.
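As an illustration only (these extra fields are not part of this tutorial's model), a richer schema definition object might look like this:

```javascript
// A sketch of a schema definition object with more fields. In
// app/models/nerd.js it would be passed to mongoose.model('Nerd', ...)
// in place of the single-field version above.
var nerdFields = {
  name:      { type: String, default: '' },
  email:     { type: String, default: '' },       // hypothetical extra field
  level:     { type: Number, default: 1 },        // hypothetical extra field
  createdAt: { type: Date,   default: Date.now }  // hypothetical extra field
};

console.log(Object.keys(nerdFields).length); // -> 4
```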
Let’s move onto the routes and use the model we just created.
In the future, you can use the app folder to add more models, controllers, routes, and anything backend (Node) specific to your app.
Let’s get to our routes. When creating a single-page application, you will usually want to separate the functions of the backend application and the frontend application as much as possible.
To separate the duties of the separate parts of our application, we will be able to define as many routes as we want for our Node backend. This could include API routes or any other routes of that nature.
We won’t be diving into those since we’re not really handling creating an API or doing CRUD in this tutorial, but just know that this is where you’d handle those routes.
We’ve commented out the place to put those routes here.
// grab the nerd model we just created
var Nerd = require('./models/nerd');
module.exports = function(app) {
// server routes ===========================================================
// handle things like api calls
// authentication routes
// sample api route
app.get('/api/nerds', function(req, res) {
// use mongoose to get all nerds in the database
Nerd.find(function(err, nerds) {
// if there is an error retrieving, send the error and
// return so that nothing after it executes
if (err)
return res.send(err);
res.json(nerds); // return all nerds in JSON format
});
});
// route to handle creating goes here (app.post)
// route to handle delete goes here (app.delete)
// frontend routes =========================================================
// route to handle all angular requests
app.get('*', function(req, res) {
res.sendfile('./public/views/index.html'); // load our public/index.html file
});
};
This is where you can handle your API routes. For all other routes (*
), we will send the user to our frontend application where Angular can handle routing them from there.
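As a rough sketch of what those placeholder routes might eventually look like (the path and request shape are assumptions modeled on the GET route above, not something this tutorial defines), the handler logic can be written as a plain function so the Express wiring stays a one-liner:

```javascript
// Sketch of the create handler the `app.post` placeholder refers to.
// `Nerd` stands in for the Mongoose model passed in from app/routes.js.
function createNerd(Nerd, req, res) {
  Nerd.create({ name: req.body.name }, function (err, nerd) {
    if (err) return res.send(err); // stop on error
    res.json(nerd);                // echo the created record back as JSON
  });
}

// In app/routes.js it would be wired up as:
// app.post('/api/nerds', function (req, res) { createNerd(Nerd, req, res); });
```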
We now have everything we need for our server! At this point, we can start it, send a user the Angular app (index.html), and handle one API route to get all the nerds.
Let’s create that index.html
file so that we can test out our server.
Let’s just open up this file and add some quick text so we can test our server.
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Starter MEAN Single Page Application</title>
</head>
<body>
we did it!
</body>
</html>
With all the backend (and a tiny frontend piece) in place, let’s start up our server. Go into your console and type:
- node server.js
Now we can go into our browser and see http://localhost:8080
in action.
So simple, and yet so beautiful. Now let’s get to the frontend single-page AngularJS stuff.
With all of our backend work in place, we can focus on the frontend. Our Node backend will send any user that visits our application to our index.html
file since we’ve defined that in our catch-all route (app.get('*')
).
The frontend work will require a few things:
We will need certain files for our application, like Bootstrap and, of course, Angular. We will tell Bower to grab those components for us.
Bower is a great frontend tool to manage your frontend resources. You just specify the packages you need and it will go grab them for you. Here’s an article on getting started with Bower.
First, we will need Bower installed on our machine. Just type in npm install -g bower
into your console.
After you have done that, you will now have access to bower globally on your system. We will need 2 files to get Bower working for us (.bowerrc
and bower.json
). We’ll place both of these in the root of our project.
.bowerrc will tell Bower where to place our files:
{
"directory": "public/libs"
}
bower.json is similar to package.json and will tell Bower which packages are needed.
{
"name": "starter-node-angular",
"version": "1.0.0",
"dependencies": {
"bootstrap": "latest",
"font-awesome": "latest",
"animate.css": "latest",
"angular": "latest",
"angular-route": "latest"
}
}
Let’s run it! In your console, in the root of your application, type:
- bower install
You can see bower pull in all the files we needed and now we have them in public/libs
!
Now we can get down to business and work on our Angular stuff.
For our Angular application, we will want:
Let’s create the files needed for our Angular application. This will be done in public/js
. Here is the application structure for our frontend:
- public
----- js
---------- controllers
-------------------- MainCtrl.js
-------------------- NerdCtrl.js
---------- services
-------------------- NerdService.js
---------- app.js
---------- appRoutes.js
Once we have created our controllers, services, and routes, we will combine them all and inject these modules into our main app.js
file to get everything working together.
We won’t go too far in-depth here, so let’s just show off both of our controllers and their code.
angular.module('MainCtrl', []).controller('MainController', function($scope) {
$scope.tagline = 'To the moon and back!';
});
angular.module('NerdCtrl', []).controller('NerdController', function($scope) {
$scope.tagline = 'Nothing beats a pocket protector!';
});
Of course in the future, you will be doing a lot more with your controllers, but since this is more about application setup, we’ll move onto the services.
This is where you would use $http
or $resource
to do your API calls to the Node backend from your Angular frontend.
angular.module('NerdService', []).factory('Nerd', ['$http', function($http) {
return {
// call to get all nerds
get : function() {
return $http.get('/api/nerds');
},
// these will work when more API routes are defined on the Node side of things
// call to POST and create a new nerd
create : function(nerdData) {
return $http.post('/api/nerds', nerdData);
},
// call to DELETE a nerd
delete : function(id) {
return $http.delete('/api/nerds/' + id);
}
}
}]);
That’s it for our services. The only function that will work in that NerdService
is the get
function. The other two are just placeholders and they won’t work unless you define those specific routes in your app/routes.js
file. For more on building APIs, here’s a tutorial for Building a RESTful Node API.
These services will call our Node backend, retrieve data in JSON format, and then we can use it in our Angular controllers.
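To make that concrete, here is a sketch of a controller consuming the service; this wiring is an assumption, since the tutorial's NerdController doesn't inject Nerd yet:

```javascript
// Sketch: a controller function consuming the Nerd service. $http.get()
// returns a promise, so the data lands in a .then() callback.
function NerdListController($scope, Nerd) {
  Nerd.get().then(function (response) {
    $scope.nerds = response.data; // the JSON array from /api/nerds
  });
}
```

In a template, the records would then be available for something like an `ng-repeat` over `nerds`.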
Now we will define our Angular routes inside of our public/js/appRoutes.js
file.
angular.module('appRoutes', []).config(['$routeProvider', '$locationProvider', function($routeProvider, $locationProvider) {
$routeProvider
// home page
.when('/', {
templateUrl: 'views/home.html',
controller: 'MainController'
})
// nerds page that will use the NerdController
.when('/nerds', {
templateUrl: 'views/nerd.html',
controller: 'NerdController'
});
$locationProvider.html5Mode(true);
}]);
Our Angular frontend will use the template file and inject it into the <div ng-view></div>
in our index.html
file. It will do this without a page refresh which is exactly what we want for a single page application.
For more information on Angular routing and templating, check out our other tutorial: Single Page Apps with AngularJS.
With all of the Angular routing ready to go, we just need to create the view files. The smaller template files (home.html and nerd.html) will be injected into our index.html file inside of the <div ng-view></div>.
Notice in our index.html
file we are calling the resources we pulled in using bower.
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8">
<base href="/">
<title>Starter Node and Angular</title>
<!-- CSS -->
<link rel="stylesheet" href="libs/bootstrap/dist/css/bootstrap.min.css">
<link rel="stylesheet" href="css/style.css"> <!-- custom styles -->
<!-- JS -->
<script src="libs/angular/angular.min.js"></script>
<script src="libs/angular-route/angular-route.min.js"></script>
<!-- ANGULAR CUSTOM -->
<script src="js/controllers/MainCtrl.js"></script>
<script src="js/controllers/NerdCtrl.js"></script>
<script src="js/services/NerdService.js"></script>
<script src="js/appRoutes.js"></script>
<script src="js/app.js"></script>
</head>
<body ng-app="sampleApp" ng-controller="NerdController">
<div class="container">
<!-- HEADER -->
<nav class="navbar navbar-inverse">
<div class="navbar-header">
<a class="navbar-brand" href="/">Stencil: Node and Angular</a>
</div>
<!-- LINK TO OUR PAGES. ANGULAR HANDLES THE ROUTING HERE -->
<ul class="nav navbar-nav">
<li><a href="/nerds">Nerds</a></li>
</ul>
</nav>
<!-- ANGULAR DYNAMIC CONTENT -->
<div ng-view></div>
</div>
</body>
</html>
<!-- public/views/home.html -->
<div class="jumbotron text-center">
<h1>Home Page 4 Life</h1>
<p>{{ tagline }}</p>
</div>
<!-- public/views/nerd.html -->
<div class="jumbotron text-center">
<h1>Nerds and Proud</h1>
<p>{{ tagline }}</p>
</div>
We have defined our resources, controllers, services, and routes and included the files in our index.html
. Now let’s make them all work together.
Let’s set up our Angular app to use all of our components. We will use dependency injection and set up our Angular application.
angular.module('sampleApp', ['ngRoute', 'appRoutes', 'MainCtrl', 'NerdCtrl', 'NerdService']);
Now we have an application that has a Node.js backend and an AngularJS frontend. We can use this foundation to build any sort of application moving forward. We can add authentication and CRUD functionality to create a good application.
Also, for those looking for this project with the addition of the Jade templating engine, Florian Zemke has created a Jade version at his GitHub repo.
Moving forward, I’d encourage you to take this and see if it fits your needs. The point of this was to have a foundation for starting applications so that we aren’t reinventing the wheel every time we start a new project.
This is a very barebones example; for something more complete, I’d encourage people to take a look at mean.io for a more in-depth starter application.
Check out the GitHub repo for this project and take from it what you need. Sound off in the comments if you have any questions about how to expand this into your own applications.
We’ve put this tutorial together as a starter kit at the GitHub repo. We’ll keep adding features to it on request and any updates we think will be helpful for applications.
Hopefully, it will be a good foundation for any sort of Single Page MEAN Stack Application. To try it out:
- npm install
- bower install
- node server.js
- Visit http://localhost:8080 in your browser
Further Reading: When building MEAN stack apps, the backend Node application will usually be an API that we build. This will allow the Angular frontend to consume our API that we built through Angular services. The next step is to hash out building a Node API. This next tutorial will teach us that and then we can go further in-depth on how to build the frontend Angular application to consume our new API.
This article is part of our Getting MEAN series. Here are the other articles.
In this tutorial, we will go through the process of creating a plugin for WordPress. A WordPress plugin extends the WordPress core and is intended to be reusable code or functionality across multiple projects. This is one of the most interesting things about it - you can share your code or functionality on the web.
I am sure many, if not all of you, have already searched for a plugin in the WordPress repository or any of the available marketplaces. This is one of the reasons why WordPress is so widely used. It’s extremely extensible and has a huge community of developers around it. As of today, there are more than 39,000 publicly available free plugins on the WordPress repository.
The plugin we are going to make in this tutorial will help automate some of the most usual functions developers do when creating a new WordPress project. By the end of this tutorial you will know how to:
You might be thinking that it would be easier and faster to just copy and paste code from your last project and not to even bother with writing a custom plugin to do this. Well, this is where you are wrong!
This example will demonstrate one of the main benefits of a WordPress plugin: eliminating repetition. All you’ll need to do is add your plugin, change the options, and be on your way. You won’t need to worry that you forgot a function, because it’s all self-contained in a single plugin.
The best part about building a WordPress plugin is joining the WordPress open-source community. You can share and get feedback on your work, sell it as a premium plugin, and add it to a browsable marketplace.
Above is a screenshot of the final plugin we’re building. As mentioned earlier, it groups a handful of functions and settings you would usually add to each new project.
Some of the cool things that we’re going to be able to do are:
The best way to begin a new plugin is by working from the incredibly useful WordPress Plugin Boilerplate. You might ask yourself why you would use a boilerplate instead of building from scratch. This boilerplate gets you started quickly with a standardized, organized, and object-oriented foundation - basically everything you would want if you started from scratch.
To get started, just go to the WordPress Plugin Boilerplate Generator, fill out the form, and click on the Build button.
You have now downloaded the generated plugin boilerplate as a .zip file. Now, simply unzip it and add it to the plugins folder of your WordPress development installation.
You might want to have a dedicated local environment for testing your plugins. We recommend using either MAMP/XAMPP or a LAMP Vagrant box like the awesome Scotch Box. You should also make sure to turn on the debug functionality of WordPress by adding the following to your wp-config.php
file:
define('WP_DEBUG', true);
This will help us check for any errors while coding our plugin.
Now that the boilerplate of our plugin is ready and installed, let’s review a bit about the plugin folder structure before we begin with coding it.
The first thing you might notice is that we have 4 main folders:
The admin folder is where all our admin-facing code will live, including the CSS, JS, and partials folders and our PHP admin class file class-wp-cbf-admin.php.
Here you will find:
- The main class file class-wp-cbf.php, where we will add all our actions and filters.
- The activation file class-wp-cbf-activator.php and the deactivation file class-wp-cbf-deactivator.php.
- The internationalization file class-wp-cbf-i18n.php.
- The loader file class-wp-cbf-loader.php, which will basically call all our actions in the main class file.
- A languages folder with a ready-to-use .pot file for making your plugin available in multiple languages.
- The public folder, which is the same as our admin folder except for public-facing functionality.
This leaves us with 4 files:
- LICENCE.txt: the GPL-2 license.
- README.txt: this will include your plugin name, compatibility version, and description as seen on the plugin page in the WordPress repository. This is the first file we will edit.
- uninstall.php: this script is called when the user clicks on the Delete link in the WordPress plugin backend.
- wp-cbf.php: the main plugin bootstrap file. You will likely edit this file with the version number and the short description of your plugin.
Now that all this is cleared up, it’s time to get our hands dirty. Let’s add some code to our brand new plugin!
If you go to the plugins page in your WordPress back-end, you will see our plugin with its title, a description, and Activate
, Edit
and Delete
links.
If you click on Activate
, it will work thanks to the activator
and deactivator
classes in the includes
folder. This is great, but once activated, nothing really will happen yet.
We need to add a settings page where we will add our plugin options. You might also notice here that we still have a very generic description - let’s fix that first.
This short description is written in the comments of the main plugin class: wp-cbf/wp-cbf.php
Since we are at the root of our plugin, let’s update the README.txt
file. You will want this to be a pretty detailed explanation, especially since this is what people will see when they reach your plugin webpage. You’ll also notice installation and FAQ sections. The more you cover here, the less you might need to explain in support requests later.
If you reload your Plugins admin page now, you will see your new description.
Next, let’s add a setting page so we will be able to edit our plugin’s options.
Open admin/class-wp-cbf-admin.php, where we already have 3 functions:
- __construct, which runs whenever this class is instantiated
- enqueue_styles and enqueue_scripts, which are where we will add our admin-related CSS and JS
After these functions, add the following 3 functions. You don’t need to add the huge comment blocks since they’re just there to help you.
/**
*
* admin/class-wp-cbf-admin.php - Don't add this
*
**/
/**
* Register the administration menu for this plugin into the WordPress Dashboard menu.
*
* @since 1.0.0
*/
public function add_plugin_admin_menu() {
/*
* Add a settings page for this plugin to the Settings menu.
*
* NOTE: Alternative menu locations are available via WordPress administration menu functions.
*
* Administration Menus: http://codex.wordpress.org/Administration_Menus
*
*/
add_options_page( 'WP Cleanup and Base Options Functions Setup', 'WP Cleanup', 'manage_options', $this->plugin_name, array($this, 'display_plugin_setup_page')
);
}
/**
* Add settings action link to the plugins page.
*
* @since 1.0.0
*/
public function add_action_links( $links ) {
/*
* Documentation : https://codex.wordpress.org/Plugin_API/Filter_Reference/plugin_action_links_(plugin_file_name)
*/
$settings_link = array(
'<a href="' . admin_url( 'options-general.php?page=' . $this->plugin_name ) . '">' . __('Settings', $this->plugin_name) . '</a>',
);
return array_merge( $settings_link, $links );
}
/**
* Render the settings page for this plugin.
*
* @since 1.0.0
*/
public function display_plugin_setup_page() {
include_once( 'partials/wp-cbf-admin-display.php' );
}
Let’s review and explain those 3 functions:
add_plugin_admin_menu()
This function, as its name says, adds a menu item under the Settings sub-menu. It does this by calling add_options_page(), which takes five arguments (all visible in the code above):
- The page title ('WP Cleanup and Base Options Functions Setup')
- The menu title ('WP Cleanup')
- The capability required to see the page ('manage_options')
- The menu slug (our unique $this->plugin_name)
- The callback function, display_plugin_setup_page(), which is where our options will be displayed
add_action_links()
This function adds a “Settings” link to the “Deactivate | Edit” list when our plugin is activated. It takes one argument, the $links array, into which we merge our new link array.
display_plugin_setup_page()
This one is called by our first add_plugin_admin_menu() function. It simply includes the partial file where we will add our options; it will be mostly HTML with a little PHP logic.
All this is great, but if you just save that file and go back to your plugins page, nothing new will appear yet. We first need to register these functions in our define_admin_hooks() method.
Go to the includes folder and open includes/class-wp-cbf.php. We need to add the following inside the define_admin_hooks() private method to get this started:
/**
*
* include/class-wp-cbf.php - Don't add this
*
**/
// Add menu item
$this->loader->add_action( 'admin_menu', $plugin_admin, 'add_plugin_admin_menu' );
// Add Settings link to the plugin
$plugin_basename = plugin_basename( plugin_dir_path( __DIR__ ) . $this->plugin_name . '.php' );
$this->loader->add_filter( 'plugin_action_links_' . $plugin_basename, $plugin_admin, 'add_action_links' );
Each one of these lines calls the loader to register an action or filter hook. From the includes/class-wp-cbf-loader.php file, we can see how the arguments work, using the first action as an example:
- $hook ('admin_menu'): the action/filter hook we will attach our modifications to
- $component ($plugin_admin): a reference to the instance of the object on which the action is defined; put simply, for admin hooks it will be $plugin_admin and for public hooks it will be $plugin_public
- $callback (add_plugin_admin_menu): the name of our function
- $priority (not set here): the priority at which the function fires, with a default of 10
- $accepted_args (not set here): the number of arguments passed to our callback function, with a default of 1
You can also see that we are setting up a $plugin_basename variable. It gives us the plugin’s main file path and is needed to add the action_links.
Now, if you refresh your plugins admin page and activate the plugin you will now see the “Settings” link and also the menu link in there.
Now we have a page to display our settings, and that’s pretty good, but it’s empty. You can verify that by jumping to this page, either by clicking on the “Settings” link or on the “WP Cleanup” menu item.
Before you go and add all your options fields, you might want to write out all your plugin options on paper, along with the type of field each will use. For this particular plugin, most of these will be checkboxes to enable/disable functionality, a couple of text inputs, the selects that we will cover below, and some other very specific fields (color-pickers and image uploads, which we will talk about in part 2).
I would also recommend using another utility plugin to grab all the admin-specific markup that we will use. It’s not available on the WordPress repository, so you will need to get it from GitHub: WordPress Admin Style
Now, with our list of fields and some admin related markup, we can go on and add our first inputs. For our plugin’s purpose, we will be adding 4 checkboxes to start.
Open admin/partials/wp-cbf-admin-display.php
since it’s the file that will display our settings page (as stated in our add_options_page()
). Now add the following:
<?php
/**
*
* admin/partials/wp-cbf-admin-display.php - Don't add this comment
*
**/
?>
<!-- This file should primarily consist of HTML with a little bit of PHP. -->
<div class="wrap">
<h2><?php echo esc_html(get_admin_page_title()); ?></h2>
<form method="post" name="cleanup_options" action="options.php">
<!-- remove some meta and generators from the <head> -->
<fieldset>
<legend class="screen-reader-text"><span>Clean WordPress head section</span></legend>
<label for="<?php echo $this->plugin_name; ?>-cleanup">
<input type="checkbox" id="<?php echo $this->plugin_name; ?>-cleanup" name="<?php echo $this->plugin_name; ?>[cleanup]" value="1"/>
<span><?php esc_attr_e('Clean up the head section', $this->plugin_name); ?></span>
</label>
</fieldset>
<!-- remove injected CSS from comments widgets -->
<fieldset>
<legend class="screen-reader-text"><span>Remove Injected CSS for comment widget</span></legend>
<label for="<?php echo $this->plugin_name; ?>-comments_css_cleanup">
<input type="checkbox" id="<?php echo $this->plugin_name; ?>-comments_css_cleanup" name="<?php echo $this->plugin_name; ?>[comments_css_cleanup]" value="1"/>
<span><?php esc_attr_e('Remove Injected CSS for comment widget', $this->plugin_name); ?></span>
</label>
</fieldset>
<!-- remove injected CSS from gallery -->
<fieldset>
<legend class="screen-reader-text"><span>Remove Injected CSS for galleries</span></legend>
<label for="<?php echo $this->plugin_name; ?>-gallery_css_cleanup">
<input type="checkbox" id="<?php echo $this->plugin_name; ?>-gallery_css_cleanup" name="<?php echo $this->plugin_name; ?>[gallery_css_cleanup]" value="1" />
<span><?php esc_attr_e('Remove Injected CSS for galleries', $this->plugin_name); ?></span>
</label>
</fieldset>
<!-- add post,page or product slug class to body class -->
<fieldset>
<legend class="screen-reader-text"><span><?php _e('Add Post, page or product slug to body class', $this->plugin_name); ?></span></legend>
<label for="<?php echo $this->plugin_name; ?>-body_class_slug">
<input type="checkbox" id="<?php echo $this->plugin_name;?>-body_class_slug" name="<?php echo $this->plugin_name; ?>[body_class_slug]" value="1" />
<span><?php esc_attr_e('Add Post slug to body class', $this->plugin_name); ?></span>
</label>
</fieldset>
<!-- load jQuery from CDN -->
<fieldset>
<legend class="screen-reader-text"><span><?php _e('Load jQuery from CDN instead of the basic wordpress script', $this->plugin_name); ?></span></legend>
<label for="<?php echo $this->plugin_name; ?>-jquery_cdn">
<input type="checkbox" id="<?php echo $this->plugin_name; ?>-jquery_cdn" name="<?php echo $this->plugin_name; ?>[jquery_cdn]" value="1" />
<span><?php esc_attr_e('Load jQuery from CDN', $this->plugin_name); ?></span>
</label>
<fieldset>
<p>You can choose your own cdn provider and jQuery version(default will be Google Cdn and version 1.11.1)-Recommended CDN are <a href="https://cdnjs.com/libraries/jquery">CDNjs</a>, <a href="https://code.jquery.com/jquery/">jQuery official CDN</a>, <a href="https://developers.google.com/speed/libraries/#jquery">Google CDN</a> and <a href="http://www.asp.net/ajax/cdn#jQuery_Releases_on_the_CDN_0">Microsoft CDN</a></p>
<legend class="screen-reader-text"><span><?php _e('Choose your prefered cdn provider', $this->plugin_name); ?></span></legend>
<input type="url" class="regular-text" id="<?php echo $this->plugin_name; ?>-cdn_provider" name="<?php echo $this->plugin_name; ?>[cdn_provider]" value=""/>
</fieldset>
</fieldset>
<?php submit_button('Save all changes', 'primary','submit', TRUE); ?>
</form>
</div>
This code will generate a form and a couple of checkboxes.
If you try to check one of these checkboxes now and hit save, you will get redirected to the options.php
page. This is because if you look at our form, the action
attribute is linked to options.php
. So let’s go on and save those options.
At this point, you might be thinking that before saving any of these options, we should probably first validate and sanitize them. Well, that’s exactly what we’re going to do.
So let’s validate and sanitize those options:
Let’s open admin/class-wp-cbf-admin.php in our editor and add a new validation function. After our display_plugin_setup_page() function, skip a couple of lines and add the following:
/**
*
* admin/class-wp-cbf-admin.php
*
**/
public function validate($input) {
// All checkboxes inputs
$valid = array();
//Cleanup
$valid['cleanup'] = (isset($input['cleanup']) && !empty($input['cleanup'])) ? 1 : 0;
$valid['comments_css_cleanup'] = (isset($input['comments_css_cleanup']) && !empty($input['comments_css_cleanup'])) ? 1: 0;
$valid['gallery_css_cleanup'] = (isset($input['gallery_css_cleanup']) && !empty($input['gallery_css_cleanup'])) ? 1 : 0;
$valid['body_class_slug'] = (isset($input['body_class_slug']) && !empty($input['body_class_slug'])) ? 1 : 0;
$valid['jquery_cdn'] = (isset($input['jquery_cdn']) && !empty($input['jquery_cdn'])) ? 1 : 0;
$valid['cdn_provider'] = esc_url($input['cdn_provider']);
return $valid;
}
As you can see here, we just created a function called validate
, and we are passing it an $input
argument. We then add some logic for the checkboxes to see if the input is valid.
We’re doing this with isset and !empty, which check for us whether each checkbox has been checked or not. The result of that verification is assigned to the $valid[] array. We also sanitized our url input field with esc_url; for a simple text field, we would use sanitize_text_field instead, but the process is the same.
We are now going to add the saving/update function for our options.
In the same file, right before the previous code, add:
/**
*
* admin/class-wp-cbf-admin.php
*
**/
public function options_update() {
register_setting($this->plugin_name, $this->plugin_name, array($this, 'validate'));
}
Here we use the register_setting() function, which is part of the WordPress Settings API. We are passing it three arguments:
- The option group: we use $plugin_name as it’s unique and safe.
- The option name: $plugin_name again.
- The sanitize callback: our validate() function.
Now that we have registered our settings, we need to add a small line of PHP to our form in order to get it working properly. This line will add a nonce, option_page, action, and http_referer field as hidden inputs.
So open up the form and update it so it look like the below code:
<?php
/**
*
* admin/partials/wp-cbf-admin-display.php - Don't add this comment
*
**/
?>
<div class="wrap">
<h2><?php echo esc_html( get_admin_page_title() ); ?></h2>
<form method="post" name="cleanup_options" action="options.php">
<?php settings_fields($this->plugin_name); ?>
<!-- This file should primarily consist of HTML with a little bit of PHP. -->
...
Great - we are almost there! We’re just missing one last step. We need to register the options_update()
to the admin_init
hook.
Open includes/class-wp-cbf.php
and register our new action:
/**
*
* include/class-wp-cbf.php
*
**/
// Save/Update our plugin options
$this->loader->add_action('admin_init', $plugin_admin, 'options_update');
Let’s try our option page now. On save, the page should refresh, and you should see a notice saying “Settings saved”.
Victory is ours!
But wait… If you had a checkbox checked, it’s no longer showing as checked now…
That’s because we now need to grab our saved option values and add a small condition to our inputs to reflect them.
Open the admin/partials/wp-cbf-admin-display.php file again and update it as follows:
<h2 class="nav-tab-wrapper">Clean up</h2>
<form method="post" name="cleanup_options" action="options.php">
<?php
//Grab all options
$options = get_option($this->plugin_name);
// Cleanup
$cleanup = $options['cleanup'];
$comments_css_cleanup = $options['comments_css_cleanup'];
$gallery_css_cleanup = $options['gallery_css_cleanup'];
$body_class_slug = $options['body_class_slug'];
$jquery_cdn = $options['jquery_cdn'];
$cdn_provider = $options['cdn_provider'];
?>
<?php
settings_fields($this->plugin_name);
do_settings_sections($this->plugin_name);
?>
<!-- remove some meta and generators from the <head> -->
<fieldset>
<legend class="screen-reader-text">
<span>Clean WordPress head section</span>
</legend>
<label for="<?php echo $this->plugin_name; ?>-cleanup">
<input type="checkbox" id="<?php echo $this->plugin_name; ?>-cleanup" name="<?php echo $this->plugin_name; ?>[cleanup]" value="1" <?php checked($cleanup, 1); ?> />
<span><?php esc_attr_e('Clean up the head section', $this->plugin_name); ?></span>
</label>
</fieldset>
<!-- remove injected CSS from comments widgets -->
<fieldset>
<legend class="screen-reader-text"><span>Remove Injected CSS for comment widget</span></legend>
<label for="<?php echo $this->plugin_name; ?>-comments_css_cleanup">
<input type="checkbox" id="<?php echo $this->plugin_name; ?>-comments_css_cleanup" name="<?php echo $this->plugin_name; ?>[comments_css_cleanup]" value="1" <?php checked($comments_css_cleanup, 1); ?> />
<span><?php esc_attr_e('Remove Injected CSS for comment widget', $this->plugin_name); ?></span>
</label>
</fieldset>
<!-- remove injected CSS from gallery -->
<fieldset>
<legend class="screen-reader-text"><span>Remove Injected CSS for galleries</span></legend>
<label for="<?php echo $this->plugin_name; ?>-gallery_css_cleanup">
<input type="checkbox" id="<?php echo $this->plugin_name; ?>-gallery_css_cleanup" name="<?php echo $this->plugin_name; ?>[gallery_css_cleanup]" value="1" <?php checked( $gallery_css_cleanup, 1 ); ?> />
<span><?php esc_attr_e('Remove Injected CSS for galleries', $this->plugin_name); ?></span>
</label>
</fieldset>
<!-- add post,page or product slug class to body class -->
<fieldset>
<legend class="screen-reader-text"><span><?php _e('Add Post, page or product slug to body class', $this->plugin_name); ?></span></legend>
<label for="<?php echo $this->plugin_name; ?>-body_class_slug">
<input type="checkbox" id="<?php echo $this->plugin_name; ?>-body_class_slug" name="<?php echo $this->plugin_name; ?>[body_class_slug]" value="1" <?php checked($body_class_slug, 1); ?> />
<span><?php esc_attr_e('Add Post slug to body class', $this->plugin_name); ?></span>
</label>
</fieldset>
<!-- load jQuery from CDN -->
<fieldset>
<legend class="screen-reader-text"><span><?php _e('Load jQuery from CDN instead of the bundled WordPress script', $this->plugin_name); ?></span></legend>
<label for="<?php echo $this->plugin_name; ?>-jquery_cdn">
<input type="checkbox" id="<?php echo $this->plugin_name; ?>-jquery_cdn" name="<?php echo $this->plugin_name; ?>[jquery_cdn]" value="1" <?php checked($jquery_cdn, 1); ?> />
<span><?php esc_attr_e('Load jQuery from CDN', $this->plugin_name); ?></span>
</label>
<fieldset>
<p>You can choose your own CDN provider and jQuery version (the default is the Google CDN with version 1.11.1). Recommended CDNs are <a href="https://cdnjs.com/libraries/jquery">CDNjs</a>, <a href="https://code.jquery.com/jquery/">jQuery official CDN</a>, <a href="https://developers.google.com/speed/libraries/#jquery">Google CDN</a> and <a href="http://www.asp.net/ajax/cdn#jQuery_Releases_on_the_CDN_0">Microsoft CDN</a>.</p>
<legend class="screen-reader-text"><span><?php _e('Choose your preferred CDN provider', $this->plugin_name); ?></span></legend>
<input type="url" class="regular-text" id="<?php echo $this->plugin_name; ?>-cdn_provider" name="<?php echo $this->plugin_name; ?>[cdn_provider]" value="<?php if (!empty($cdn_provider)) echo esc_url($cdn_provider); ?>"/>
</fieldset>
</fieldset>
<?php submit_button('Save all changes', 'primary','submit', TRUE); ?>
So what we’re doing is basically checking to see if the value exists already, and, if it does, populating the input field with the current value.
We do this by first grabbing all our options and assigning each one to a variable (try to keep those explicit so you know which is which).
Then we add a small condition. We will use the WordPress built-in checked
function on our inputs to read the saved value and add the "checked" attribute if the option exists and is set to 1.
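The logic behind WordPress's checked() helper is simple: compare the saved value against the expected one and emit the attribute only on a match. A rough sketch in Python (a hypothetical mirror of the PHP helper, not the WordPress source):

```python
def checked(value, expected=1):
    # Mirror of WordPress's checked() helper: emit the attribute
    # only when the saved value matches the expected one.
    return ' checked="checked"' if str(value) == str(expected) else ''

# Render a checkbox with a hypothetical saved option value.
cleanup = 1  # stand-in for $options['cleanup']
html = '<input type="checkbox" name="cleanup" value="1"%s />' % checked(cleanup)
print(html)
```

A saved option of 1 renders the box ticked; 0 leaves the attribute out entirely.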
So save your file, save your plugin settings one last time, and, boom, we have successfully finished our plugin.
We have covered a lot: from the benefits of creating your own plugin and sharing it with fellow WordPress users, to why you might want to turn your repetitive functions into a plugin. We have reviewed the incredible WordPress Plugin Boilerplate, its structure, and why you should definitely use it.
We got our hands dirty and took the first steps of building a plugin, with two types of field validation and sanitization, all while keeping an object-oriented PHP approach with clean and explicit code. We're not finished yet, though.
In part 2, we will bring our plugin to life, creating the functions that will actually influence your WordPress website. We will also discover more complex field types and sanitization and, finally, get our plugin ready to be reviewed by the WordPress repository team.
We’ll wrap this up with some additional links and sources:
In this post, we are going to create a Create React App application, then add configuration to debug it in Visual Studio Code.
For a Create React App application, install the Debugger for Chrome extension, create a debug configuration in VS Code, and then run in debug mode.
To debug a Create React App application, you will first need one. I'll provide the basic steps, but for more reference on how to get started, look at the Create React App page.
First, you’ll need to install the Create React App.
- npm install -g create-react-app
After that finishes, you’ll need to actually generate your new application. This will take a bit as it needs to install lots of npm packages.
- create-react-app my-app
Open the project in VS Code and you should see the following.
Now, that you’ve got your new fancy React app, go ahead and run it to make sure everything looks right.
- npm start
It should look like this.
Assuming you’ve made it this far, we are ready to start debugging! Before we do, however, it’s worth understanding how configuring debugging in VS Code works. Basically, debug configurations are saved in a launch.json
file which is stored inside of a .vscode
folder. This .vscode
folder is used to store different configurations for Code including our required debugging stuff.
Before you create your debug configuration, you need to install the Debugger for Chrome extension. Find and install this extension from the extension tab in VS Code. After installing, reload VS Code.
Now, to create a debug configuration, you can open the debug panel (the bug-looking button on the left panel). At the top of the debug panel, you should see a dropdown that says “No Configurations”.
To the right of that dropdown, there is a gear icon. Click this button to have VS Code automatically generate that .vscode
folder and launch.json
file mentioned above.
Then choose Chrome.
You should get the following configuration created for you.
The only thing we need to do is update the port from 8080 to 3000.
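After updating the port, the generated launch.json typically looks like the following (a sketch; field names can vary slightly between versions of the extension):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "chrome",
      "request": "launch",
      "name": "Launch Chrome against localhost",
      "url": "http://localhost:3000",
      "webRoot": "${workspaceFolder}"
    }
  ]
}
```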
Now we're ready! Go ahead and click the play button at the top of the Debug panel, which will launch an instance of Chrome in debug mode. Keep in mind your app should already be running from npm start earlier. In VS Code, you should see the Debug toolbar pop up.
With this up and running, you can set a breakpoint in your App.js. Open up your App.js and add a breakpoint inside of the render function by clicking in the gutter (to the left of the line numbers). It should look like this.
Now, refresh debugging by clicking the refresh button on the debugging toolbar. This should open your application again and trigger this breakpoint. You should be directed back to VS Code directly to the place where you set your breakpoint.
From here, you can set more breakpoints, inspect variables, etc. If you are interested in learning more about debugging JavaScript in general in either Chrome or VS Code you can check out Debugging JavaScript in Chrome and Visual Studio Code.
If you have any follow-up questions or comments, leave one below or find me on Twitter @jamesqquick.
For video content, check out my YouTube Channel
In this tutorial, I will be teaching how to upload files in Angular 2+.
Throughout this tutorial, Angular means Angular version greater than 2.x unless stated otherwise.
In this tutorial, I will also help you all create a server script that handles the file uploads.
I will teach two methods of file uploads using Angular.
The first method entails using the ng2-file-upload
package, while the second method is handling the uploads without using any third-party package.
This is what the app will look like when we are done building.
For the server-side, we will be using Node.js (Express) for the script that handles the upload.
To get started, we will need to install the Express generator, which takes care of the configuration and makes our work much easier.
So we run this command.
- sudo npm install -g express-generator
Once Express generator has been installed, it’s time to create our application.
So we run the following command.
- express -e angular-file
After creating the application, we would need to move into the directory, and run npm install
- cd angular-file
- npm install
At this point, if we run the npm start
command, we should be able to see the default Express page.
The next step would be to install Multer
. Multer is a package for handling file uploads in Express. To install Multer, we run the following command.
- npm install multer --save
At this point, we have multer
installed; next, we need to use it in the route handling our upload function.
Let us open up our routes/index.js
and replace it with the following:
// require the express library
var express = require('express');
// require the express router
var router = express.Router();
// require multer for the file uploads
var multer = require('multer');
// set the directory for the uploads to be uploaded to
var DIR = './uploads/';
// define the type of upload multer will be doing and pass in its destination; in our case, it's a single file with the name photo
var upload = multer({dest: DIR}).single('photo');

/* GET home page. */
router.get('/', function (req, res, next) {
  // render the index page, and pass data to it.
  res.render('index', { title: 'Express' });
});

// our file upload function.
router.post('/', function (req, res, next) {
  var path = '';
  upload(req, res, function (err) {
    if (err) {
      // an error occurred when uploading
      console.log(err);
      return res.status(422).send("An error occurred");
    }
    // no error occurred.
    path = req.file.path;
    return res.send("Upload completed for " + path);
  });
});

module.exports = router;
In the above route file, we imported the multer
library, created a variable DIR
that holds the destination directory, and then defined an upload variable that holds the multer upload function,
telling it that it will be uploading a single file with the name photo
.
In our post route, we call the upload function, which acts as a middleware, adding a callback so we can know whether the file was uploaded or not.
Once the file has been uploaded, multer
provides an interface for us to get the location of the uploaded file, using req.file.path
, which we assign to a variable and return in the success message.
At this point, however, if we try to access the route from any client and upload files to it, the request will fail with a CORS (cross-origin) blocked error
, which, of course, is expected, as we are going to be calling the upload API from another domain.
However, there’s a fix for that.
Locate your app.js
file in the root folder, find the line that says app.use('/', routes)
, and just before that line, add the following:
// create a cors middleware
app.use(function(req, res, next) {
  // set headers to allow cross origin requests.
  res.header("Access-Control-Allow-Origin", "*");
  res.header('Access-Control-Allow-Methods', 'PUT, GET, POST, DELETE, OPTIONS');
  res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
  next();
});
What we have done above is create a middleware that adds the CORS headers to the response, as well as the allowed methods for that origin.
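The same idea, a middleware that stamps CORS headers onto every response, can be sketched framework-free in Python as a WSGI wrapper (an illustration of the concept, not part of the Express app):

```python
def cors_middleware(app):
    """Wrap a WSGI app so every response carries the same CORS headers
    the Express middleware above adds."""
    def wrapped(environ, start_response):
        def start_with_cors(status, headers, exc_info=None):
            headers = list(headers) + [
                ("Access-Control-Allow-Origin", "*"),
                ("Access-Control-Allow-Methods", "PUT, GET, POST, DELETE, OPTIONS"),
                ("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept"),
            ]
            return start_response(status, headers, exc_info)
        return app(environ, start_with_cors)
    return wrapped

# A minimal app and a fake start_response to demonstrate without a server.
def hello_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

captured = {}
def fake_start_response(status, headers, exc_info=None):
    captured.update(dict(headers))

body = cors_middleware(hello_app)({}, fake_start_response)
print(captured["Access-Control-Allow-Origin"])  # *
```

Every response that passes through the wrapper now carries the CORS headers, regardless of what the inner app sets.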
At this point, we can hit CTRL+C
on the terminal, and then run npm start
again to reload changes.
Our server script is now ready to receive and upload files to our root upload folder.
Now it’s time we move to create the Angular project that does the file upload.
Let's move into the public
folder, and use the angular-cli
, a command line interface for developing Angular apps, to create a new Angular project.
So in our terminal, we run the following commands to install it and create a new project.
- npm install -g angular-cli
Change directory to the public folder of our working directory
- cd public
Create a new angular project called testupload
.
- ng new testupload
Change directory to the test upload folder.
- cd testupload
Serve the angular application.
- ng serve
At this point, we see that npm packages are being installed, and as such, we wait till they are done.
Once they have been installed, we can run the ng serve
to serve our application for us to see.
Extra Reading on Angular CLI - Use the Angular CLI For Faster Angular 2 Projects.
Now it’s time to create the actual file upload.
Method 1. Using the ng2-file-upload package.
Now after we have served the application, we would see a screen like this when we navigate to localhost:4200
.
Now, let’s run the following command to install the ng2-file-upload
package.
- npm i ng2-file-upload --save
This installs the module into our node_modules folder and saves it into our package.json file.
Now let’s head over to the file in src/app/app.component.ts
We will replace the contents of the file with the code below.
// import Component and the OnInit interface from angular core
import { Component, OnInit } from '@angular/core';
// import the file uploader plugin
import { FileUploader } from 'ng2-file-upload/ng2-file-upload';

// define the constant url we would be uploading to.
const URL = 'http://localhost:8000/api/upload';

// create the component properties
@Component({
  // define the element to be selected from the html structure.
  selector: 'app-root',
  // location of our template rather than writing in-line templates.
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
  // declare a property called uploader and assign it to an instance of a new FileUploader.
  // pass in the url to be uploaded to, and pass the itemAlias, which would be the name of the
  // file input when sending the post request.
  public uploader: FileUploader = new FileUploader({url: URL, itemAlias: 'photo'});
  // the default title property created by the Angular CLI; it holds the "app works!" heading text.
  title = 'app works!';

  ngOnInit() {
    // override the onAfterAddingFile property of the uploader so it doesn't authenticate with
    // credentials.
    this.uploader.onAfterAddingFile = (file) => { file.withCredentials = false; };
    // override the onCompleteItem property of the uploader so we are
    // able to deal with the server response.
    this.uploader.onCompleteItem = (item: any, response: any, status: any, headers: any) => {
      console.log("ImageUpload:uploaded:", item, status, response);
    };
  }
}
Here, we import Component alongside the OnInit
interface, so we can implement the ngOnInit
function, which serves as an initialization hook for our component.
Then we import the FileUploader
class.
We then define a constant that holds the URL we are uploading to.
In our AppComponent
class, we define a public
property called uploader
, and assign it to an instance of the file uploader, passing along our URL
and an extra itemAlias
property which we would call the File Input.
The itemAlias
property refers to the name we would like to give our file input.
The onAfterAddingFile Function
We then call the ngOnInit
function, where we override two of the uploader's functions.
The first function we override is the onAfterAddingFile
function, which is triggered after a file has been chosen. There we set the file's withCredentials flag to false, i.e., we are not authenticating with credentials.
The next function we override is the onCompleteItem
function. The reason we override this is so we can get the response from the server.
In our case, we just console log the status, response, and the item.
Now we move into our HTML file and replace it with the following content, so we can add the input type.
<h1>
<!-- here we echo the title from the component -->
{{title}}
</h1>
<!-- File input for the file-upload plugin, with special ng2-file-upload directive called ng2FileSelect -->
<input type="file" name="photo" ng2FileSelect [uploader]="uploader" />
<!-- button to trigger the file upload when submitted -->
<button type="button" class="btn btn-success btn-s"
(click)="uploader.uploadAll()"
[disabled]="!uploader.getNotUploadedItems().length">
Upload with ng-2 file uploader
</button>
So we create an input of type file and we attach the ng2FileSelect
directive to it, which lets us bind the uploader attribute it provides to our own uploader.
Then we create a button that is disabled if there is no item in the upload queue and has a click function to upload all files in the queue.
However, if we save our file at this time and run it, we would run into errors.
The Ng2FileSelect Directive
We have to add a declaration to our app.module.ts
so we can use the ng2FileSelect
directive.
So we add this line to the top of our app.module.ts
:
import { FileSelectDirective } from 'ng2-file-upload';
And we also add the FileSelectDirective
to our declarations
declarations: [
  AppComponent,
  FileSelectDirective
],
So our app.module.ts
should look this way.
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';
import { FileSelectDirective } from 'ng2-file-upload';
import { AppComponent } from './app.component';
@NgModule({
  declarations: [
    AppComponent,
    FileSelectDirective
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }
Now if we launch our application and upload, we should see that our file is sent to the server.
So in the root folder, if we go to the uploads
folder, we should see that our images have been uploaded. Voilà, we just uploaded files using Angular.
However, there is a second method if we do not want to use the above plugin; it relies on building the request body ourselves with the browser's FormData API.
Let us create an extra file input and button in our app.component.html
,
So our HTML structure looks this way.
<h1>
<!-- here we echo the title from the component -->
{{title}}
</h1>
<!-- File input for the file-upload plugin, with special ng2-file-upload directive called ng2FileSelect -->
<input type="file" name="photo" ng2FileSelect [uploader]="uploader" />
<!-- button to trigger the file upload when submitted -->
<button type="button" class="btn btn-success btn-s"
(click)="uploader.uploadAll()"
[disabled]="!uploader.getNotUploadedItems().length">
Upload with ng-2 file uploader
</button>
<!-- File input for upload without using the plugin. -->
<input id="photo" type="file" />
<!-- button to trigger the file upload when submitted -->
<button type="button" class="btn btn-success btn-s" (click)="upload()">
Upload with method 2
</button>
Note that I have added another file input with an id of photo, and another button that has a click event to upload.
Now let’s create the upload function that handles the file upload.
Copy and replace your app.component.ts
with this.
// import Component, OnInit, ElementRef and Input from angular core
import { Component, OnInit, ElementRef, Input } from '@angular/core';
// import the file-upload plugin
import { FileUploader } from 'ng2-file-upload/ng2-file-upload';
// import the native angular http and response libraries
import { Http, Response } from '@angular/http';
// import the do operator to be used with the http library.
import "rxjs/add/operator/do";
// import the map operator to be used with the http library
import "rxjs/add/operator/map";

const URL = 'http://localhost:8000/api/upload';

// create the component properties
@Component({
  // define the element to be selected from the html structure.
  selector: 'app-root',
  // location of our template rather than writing inline templates.
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
  // declare a property called uploader and assign it to an instance of a new FileUploader.
  // pass in the URL to be uploaded to, and pass the itemAlias, which would be the name of the
  // file input when sending the post request.
  public uploader: FileUploader = new FileUploader({url: URL, itemAlias: 'photo'});
  // the default title property created by the Angular CLI; it holds the "app works!" heading text.
  title = 'app works!';

  ngOnInit() {
    // override the onAfterAddingFile property of the uploader so it doesn't authenticate with
    // credentials.
    this.uploader.onAfterAddingFile = (file) => { file.withCredentials = false; };
    // override the onCompleteItem property of the uploader so we are
    // able to deal with the server response.
    this.uploader.onCompleteItem = (item: any, response: any, status: any, headers: any) => {
      console.log("ImageUpload:uploaded:", item, status, response);
    };
  }

  // declare a constructor so we can inject some dependencies into the class, which can be
  // accessed using the this keyword
  constructor(private http: Http, private el: ElementRef) {
  }

  // the function which handles the file upload without using a plugin.
  upload() {
    // locate the file element meant for the file upload.
    let inputEl: HTMLInputElement = this.el.nativeElement.querySelector('#photo');
    // get the total amount of files attached to the file input.
    let fileCount: number = inputEl.files.length;
    // create a new FormData instance
    let formData = new FormData();
    // check that the file count is greater than zero, to be sure a file was selected.
    if (fileCount > 0) { // a file was selected
      // append the key name 'photo' with the first file in the element
      formData.append('photo', inputEl.files.item(0));
      // call the angular http method
      this.http
        // post the form data to the url defined above and map the response. Then subscribe
        // to initiate the post. If you don't subscribe, Angular won't send the request.
        .post(URL, formData).map((res: Response) => res.json()).subscribe(
          // on success, alert the response
          (success) => {
            alert(success._body);
          },
          (error) => alert(error)
        );
    }
  }
}
What has changed?
In this updated version, I have imported ElementRef
and Input
from @angular/core
.
I also imported Http and Response from the Angular HTTP library.
I also went ahead and imported the map and do RxJS operators to be used with our HTTP calls.
In the app component class, two things were added.
1.) A constructor 2.) The upload function.
In the constructor, we pass in our HTTP and element ref instances, so they can be accessed by this.http
and this.el
.
The upload function here is where the work lies.
We declare inputEl, which is of type HTMLInputElement, and set it to the file input we created with an id of photo, using the nativeElement.querySelector of the el.
We then declare a variable fileCount of type number and set it to the length of the files list on the input element.
We then use an if statement to be sure that a file was selected.
We then append the first file in the input as the value of the key 'photo', which our server expects, to our form data.
We then call our HTTP library to post to our previously defined URL, sending the formData
as the request body.
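Under the hood, FormData produces a multipart/form-data request body. As a rough sketch of what the browser sends for our single photo field (illustrative Python with made-up file contents, not part of the Angular app):

```python
import uuid

def encode_multipart(field, filename, content, content_type="application/octet-stream"):
    # Build a multipart/form-data body by hand, mirroring what the
    # browser's FormData sends for a single file field.
    boundary = uuid.uuid4().hex
    body = (
        "--{b}\r\n"
        'Content-Disposition: form-data; name="{f}"; filename="{n}"\r\n'
        "Content-Type: {t}\r\n\r\n"
    ).format(b=boundary, f=field, n=filename, t=content_type).encode() \
        + content + "\r\n--{b}--\r\n".format(b=boundary).encode()
    headers = {"Content-Type": "multipart/form-data; boundary=" + boundary}
    return headers, body

headers, body = encode_multipart("photo", "cat.png", b"fake-image-bytes")
print(headers["Content-Type"].split(";")[0])  # multipart/form-data
```

The 'photo' field name in the body is what multer's single('photo') matches on the server side.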
At this point, if we try out our app and check the uploads
folder, we should also see that the files are being uploaded.
If you completed the above tutorial successfully, you have learned how to upload a file in Angular.
We have seen two different methods of uploading files. For those who do not like using third-party libraries, we have used the underlying form data, and for those who do not mind using plugins, we have used the ng2-file-upload
plugin by Valor.
Code without tests is broken as designed. — Jacob Kaplan-Moss
In software development, testing is paramount. So why should I do it, you ask?
The best way to do code testing is by using Test-Driven Development (TDD).
This is how it works: first you write a test that fails, then you write just enough code to make it pass, and finally you refactor while keeping the tests green.
Being a fan of best practices, we are going to use TDD to create a bucketlist API. The API will have CRUD (Create, Read, Update, Delete) and authentication capabilities. Let’s get to it then!
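As a toy illustration of that red-green cycle in plain Python (a hypothetical slugify function, unrelated to the API itself): the test is written first and fails, then just enough code is added to make it pass.

```python
import unittest

# Step 2 of the cycle: the minimal code written to satisfy the test below.
def slugify(name):
    return name.lower().replace(" ", "-")

# Step 1 of the cycle: this test existed first and failed (red) until
# slugify() above was implemented (green).
class SlugifyTestCase(unittest.TestCase):
    def test_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("Go to Ibiza"), "go-to-ibiza")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTestCase)
result = unittest.TextTestRunner().run(suite)
```

Once the test is green, you refactor freely, rerunning the suite after every change.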
The aim of this article is to help you learn awesome stuff while creating new things. We'll be creating a bucket list API. If you haven't heard the term bucket list, it is a list of all the goals you want to achieve, dreams you want to fulfill, and life experiences you desire to have before you die (or kick the bucket). This API should therefore help us create and manage our bucketlists.
To be on the same page, the API should have the ability to:
Other complementary functionalities that will come later will be:
If you haven’t done any Django, Check out Building your first Django application. It’s an excellent resource for beginners.
Now that we know about a bucketlist, here’s a bit about the tools we’ll use to create its app.
Django Rest Framework (or simply DRF) is a powerful module for building web APIs. It’s very easy to build model-backed APIs that have authentication policies and are browsable.
For DRF to work, you must have:
Create and cd into your project folder. You can call the folder anything you like.
- mkdir projects && cd $_
Then, create a virtual environment to isolate our code from the rest of the system.
- virtualenv -p /usr/local/bin/python3 venv
The -p switch tells virtualenv the path to the python version you want to use. Ensure that you put the correct python installation path after the -p switch. venv
is your environment. Even though you can name your environment anything, it’s considered best practice to simply name it venv
or env
.
Activate your virtual environment by doing this:
- source venv/bin/activate
You will see a prompt with the environment name (i.e., (venv)
). It means the environment is now active. Now we’re ready to install our requirements inside our environment.
Inside the projects folder, install Django using pip
- pip install Django
If you lack pip in your system, simply do:
- sudo easy_install pip
Since we need to keep track of our requirements, we’ll create a requirements.txt
file.
- touch requirements.txt
And add the installed requirements into the text file using pip freeze
- pip freeze > requirements.txt
Then finally, create a Django project.
- django-admin startproject djangorest
We should now have a folder with the name djangorest
created. Feel free to give it any other name.
The folder structure should look like this:
djangorest
├── djangorest
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── manage.py
Using pip, install DRF
- pip install djangorestframework
For our app to use DRF, we’ll have to add rest_framework
into our settings.py
. Let’s go right ahead and do that.
# /djangorest/djangorest/settings.py
...
# Application definition
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',  # Ensure a comma ends this line
    'rest_framework',  # Add this line
]
In Django, we can create multiple apps that integrate to form one application. An app in Django is simply a python package with a bunch of files including the __init__.py
file.
First, cd
into the djangorest directory on your terminal. We do this so that we can access the manage.py
file. Then create the app as follows:
- python3 manage.py startapp api
The startapp
command creates a new app. Our app is called api
. It will hold our API logic. So far, you should have a folder named api
alongside the djangorest
app. To integrate our api
app with the djangorest
main app, we’ll have to add it to our djangorest settings.py
. Let’s go right ahead and do that.
...
# Application definition
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rest_framework',
    'api',  # Add this line
]
We'd want to create the models first. But we have no tests written. We'll therefore write some tests in the tests.py file of our api app.
# /api/tests.py
from django.test import TestCase
from .models import Bucketlist

class ModelTestCase(TestCase):
    """This class defines the test suite for the bucketlist model."""

    def setUp(self):
        """Define the test client and other test variables."""
        self.bucketlist_name = "Write world class code"
        self.bucketlist = Bucketlist(name=self.bucketlist_name)

    def test_model_can_create_a_bucketlist(self):
        """Test the bucketlist model can create a bucketlist."""
        old_count = Bucketlist.objects.count()
        self.bucketlist.save()
        new_count = Bucketlist.objects.count()
        self.assertNotEqual(old_count, new_count)
The code above imports TestCase from django.test. The test case has a single test that checks whether the model can create a bucketlist with a name.
We need to create a blank model class. This is done in our models.py
# /api/models.py
from django.db import models

class Bucketlist(models.Model):
    pass
Running the test is super easy with Django. We’ll use the test command as follows:
- python3 manage.py test
You should see a bunch of errors all over the screen. Don't worry about it. It's because we haven't written our model fields and updated our database yet. Django uses SQLite as its default database, so we'll use it for now. Also, we don't have to write a single SQL statement when creating the models. Django handles all that for us. In the models.py
we'll define fields that will represent the table fields in our database.
# api/models.py
from django.db import models

class Bucketlist(models.Model):
    """This class represents the bucketlist model."""
    name = models.CharField(max_length=255, blank=False, unique=True)
    date_created = models.DateTimeField(auto_now_add=True)
    date_modified = models.DateTimeField(auto_now=True)

    def __str__(self):
        """Return a human readable representation of the model instance."""
        return "{}".format(self.name)
Migrations are Django’s way of propagating changes you make to your models (like adding a field, deleting a model, etc.) into your database schema.
Now that we have a rich model in place, we need to tell the database to create the relevant schema.
In your console, run this:
- python3 manage.py makemigrations
This creates a new migration based on the changes we’ve made to our model.
Then, apply the migrations to your DB by doing this:
- python3 manage.py migrate
When you run the tests, you should see something like this:
The tests have passed! This means that we can proceed to write the serializers for our app.
Serializers serialize and deserialize data. So what's all this about, you ask?
Serializing is converting data from the complex querysets we get from the DB into a form we can readily understand and transmit, like JSON or XML. Deserializing reverses the process, after validating the data we want to save to the DB.
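The idea in miniature, with plain Python and JSON (a stand-in dict rather than a real queryset):

```python
import json

# A dict standing in for one Bucketlist model instance.
bucketlist = {"id": 1, "name": "Write world class code",
              "date_created": "2017-01-01T00:00:00"}

payload = json.dumps(bucketlist)    # serialize: Python object -> JSON text
restored = json.loads(payload)      # deserialize: JSON text -> Python object

print(restored == bucketlist)  # True
```

DRF's serializers do the same round trip for model instances, with field mapping and validation layered on top.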
The ModelSerializer
class lets you automatically create a Serializer class with fields that correspond to the Model fields. This reduces our lines of code significantly.
Create a file called serializers.py
inside the API directory.
Let’s write some code in it:
# api/serializers.py
from rest_framework import serializers
from .models import Bucketlist

class BucketlistSerializer(serializers.ModelSerializer):
    """Serializer to map the Model instance into JSON format."""

    class Meta:
        """Meta class to map serializer's fields with the model fields."""
        model = Bucketlist
        fields = ('id', 'name', 'date_created', 'date_modified')
        read_only_fields = ('date_created', 'date_modified')
We’ll first write the view’s tests. Writing tests seems daunting at first. However, it’s easy to know what to test when you know what to implement.
In our situation, we want to create views that will handle the following:
Based on the above functionality, we know what to test. We’ll use them as a guide.
Let’s take the first case. If we want to test whether the API will create a bucketlist successfully, we’ll write the following code in tests.py
:
# api/tests.py
# Add these imports at the top
from rest_framework.test import APIClient
from rest_framework import status
from django.core.urlresolvers import reverse

# Define this after the ModelTestCase
class ViewTestCase(TestCase):
    """Test suite for the api views."""

    def setUp(self):
        """Define the test client and other test variables."""
        self.client = APIClient()
        self.bucketlist_data = {'name': 'Go to Ibiza'}
        self.response = self.client.post(
            reverse('create'),
            self.bucketlist_data,
            format="json")

    def test_api_can_create_a_bucketlist(self):
        """Test the api has bucket creation capability."""
        self.assertEqual(self.response.status_code, status.HTTP_201_CREATED)
This test fails when we run it. This is ok. It happens so because we haven’t implemented the views and URLs for handling the POST request.
Let’s go ahead and implement them! On views.py
, write the following code:
# api/views.py
from rest_framework import generics
from .serializers import BucketlistSerializer
from .models import Bucketlist

class CreateView(generics.ListCreateAPIView):
    """This class defines the create behavior of our rest api."""
    queryset = Bucketlist.objects.all()
    serializer_class = BucketlistSerializer

    def perform_create(self, serializer):
        """Save the post data when creating a new bucketlist."""
        serializer.save()
The ListCreateAPIView is a generic view that provides GET (list all) and POST method handlers. Notice we specified the queryset and serializer_class attributes. We also declare a perform_create method that aids in saving a new bucketlist once posted.
For it to be complete, we’ll specify URLs as endpoints for consuming our API. Think of URLs as an interface to the outside world. If someone wants to interact with our web API, they’ll have to use our URL.
Create a urls.py file in the api directory. This is where we define our URL patterns.
# api/urls.py
from django.conf.urls import url, include
from rest_framework.urlpatterns import format_suffix_patterns
from .views import CreateView
urlpatterns = [
url(r'^bucketlists/$', CreateView.as_view(), name="create"),
]
urlpatterns = format_suffix_patterns(urlpatterns)
The format_suffix_patterns function allows us to specify the data format (raw JSON or even HTML) when we use the URLs. It appends the format to be used to every URL in the pattern.
Finally, we add a URL to the main app’s urls.py file so that it points to our API app. We will have to include the api.urls we just declared above into the main app’s urlpatterns. Go to the djangorest folder and add the following to urls.py:
# djangorest/urls.py
# This is the main urls.py. It shouldn't be mistaken for the urls.py in the API directory
from django.contrib import admin
from django.conf.urls import url, include
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^', include('api.urls')) # Add this line
]
We’ll run our server the Django way with the runserver command:
- python3 manage.py runserver
You should see this output on your console
That means everything is running smoothly.
Enter the server-specified URL (http://127.0.0.1:8000/bucketlists) in your browser. And voila – it works!
Go ahead and write a bucketlist and click the post button to confirm whether our API works.
You should see something like this:
We’ll write three more tests to cover the GET, PUT, and DELETE requests. We’ll write them as follows:
# api/tests.py
def test_api_can_get_a_bucketlist(self):
"""Test the api can get a given bucketlist."""
bucketlist = Bucketlist.objects.get()
response = self.client.get(
reverse('details',
kwargs={'pk': bucketlist.id}), format="json")
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertContains(response, bucketlist)
def test_api_can_update_bucketlist(self):
"""Test the api can update a given bucketlist."""
bucketlist = Bucketlist.objects.get()  # Needed: the PUT below references bucketlist.id
change_bucketlist = {'name': 'Something new'}
res = self.client.put(
reverse('details', kwargs={'pk': bucketlist.id}),
change_bucketlist, format='json'
)
self.assertEqual(res.status_code, status.HTTP_200_OK)
def test_api_can_delete_bucketlist(self):
"""Test the api can delete a bucketlist."""
bucketlist = Bucketlist.objects.get()
response = self.client.delete(
reverse('details', kwargs={'pk': bucketlist.id}),
format='json',
follow=True)
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
If we run these tests, they should fail. Let’s fix that.
It’s time to complete the API with PUT and DELETE method handlers. We’ll define a view class for this. In the views.py file, add the following code:
# api/views.py
class DetailsView(generics.RetrieveUpdateDestroyAPIView):
"""This class handles the http GET, PUT and DELETE requests."""
queryset = Bucketlist.objects.all()
serializer_class = BucketlistSerializer
RetrieveUpdateDestroyAPIView is a generic view that provides GET (one), PUT, PATCH, and DELETE method handlers.
Finally, we create this new URL to be associated with our DetailsView.
# api/urls.py
from .views import DetailsView
# Add this pattern to the urlpatterns defined earlier
url(r'^bucketlists/(?P<pk>[0-9]+)/$',
DetailsView.as_view(), name="details"),
Enter the specified URL (http://127.0.0.1:8000/bucketlists/1/) in your browser. And voila – it works! You can now edit the existing bucketlist.
Phew! Congratulations for making it to the end of this article – You are awesome!
In Part 2 of the Series, we’ll delve into adding users, integrating authorization and authentication, documenting the API, and adding more refined tests.
Want to dig deeper? Feel free to read more of DRF’s Official Documentation.
And if you’re new to Django, I find Django For Beginners excellent.
When starting a new project, there are two staple JavaScript UI components that you will likely require. The first is a carousel, which I’ve already taken care of with Slick. And the second is a modal. Today we are going to build out a flexible CSS3 modal plugin.
Here’s a demo to see what we’ll be building:
http://codepen.io/kenwheeler/pen/LvGjK
The difference between building a plugin and a project component lies in flexibility. The first thing we are going to do is take a step back and think about the requirements. Our modal plugin should:
See that last line? That’s right folks, we’re doing this in plain old JavaScript.
Alright, let’s get our hands dirty. Our first order of business is going to be deciding on our plugin architecture and picking a design pattern. Let’s create an IIFE to create a closure we can work within. Closures can be leveraged to create a private scope, where you have control over what data you make available.
// Create an immediately invoked functional expression to wrap our code
(function() {
var privateVar = "You can't access me in the console"
}());
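To see what that privacy buys us, here is a small standalone sketch (the counter and its names are hypothetical, not part of the plugin): an IIFE that keeps a variable trapped in its closure while returning a public API.

```javascript
// The IIFE runs once; `secret` lives only inside its closure
var counter = (function() {
  var secret = 0; // private: no outside code can touch this directly
  return {
    increment: function() { secret += 1; return secret; },
    current: function() { return secret; }
  };
}());

counter.increment();
counter.increment();
console.log(counter.current()); // 2
console.log(counter.secret);    // undefined: the private variable is hidden
```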
We want to add a constructor method for our plugin, and expose it as public. Our IIFE is called globally, so our this keyword points at window. Let’s attach our constructor to the global scope using this.
// Create an immediately invoked functional expression to wrap our code
(function() {
// Define our constructor
this.Modal = function() {
}
}());
Pointing our Modal variable at a function creates a function object, which can now be instantiated with the new keyword like so:
var myModal = new Modal();
This creates a new instance of our object. Unfortunately, our object doesn’t do much at this point, so lets do something about that.
Taking a look back at our requirements, our first order of business is to allow user defined options. The way we are going to achieve this is to create a set of default options, and then merge it with the object the user provides.
// Create an immediately invoked functional expression to wrap our code
(function() {
// Define our constructor
this.Modal = function() {
// Create global element references
this.closeButton = null;
this.modal = null;
this.overlay = null;
// Define option defaults
var defaults = {
className: 'fade-and-drop',
closeButton: true,
content: "",
maxWidth: 600,
minWidth: 280,
overlay: true
}
// Create options by extending defaults with the passed-in arguments
if (arguments[0] && typeof arguments[0] === "object") {
this.options = extendDefaults(defaults, arguments[0]);
}
}
// Utility method to extend defaults with user options
function extendDefaults(source, properties) {
var property;
for (property in properties) {
if (properties.hasOwnProperty(property)) {
source[property] = properties[property];
}
}
return source;
}
}());
Pause. What’s going on here? First, we create global element references. These are important so that we can reference pieces of the modal from anywhere in our plugin. Next up, we add a default options object. If a user doesn’t provide options, we use these. If they do, we override them. So how do we know if they have provided options? The key here is in the arguments object. This is a magical object inside of every function that contains an array-like collection of everything passed to it. Because we are only expecting one argument (an object containing plugin settings), we check to make sure arguments[0] exists, and that it is indeed an object.

If that condition passes, we then merge the two objects using a privately scoped utility method called extendDefaults. extendDefaults takes an object, loops through its properties, and if a property belongs to the object itself rather than its prototype (checked with hasOwnProperty), assigns it to the source object. We can now configure our plugin with an options object:
var myModal = new Modal({
content: 'Howdy',
maxWidth: 600
});
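To see the merge behavior in isolation, the same extendDefaults helper can be run standalone (the sample option values below are just for illustration):

```javascript
// Copy each own property of `properties` onto `source`, overriding defaults
function extendDefaults(source, properties) {
  var property;
  for (property in properties) {
    if (properties.hasOwnProperty(property)) {
      source[property] = properties[property];
    }
  }
  return source;
}

var defaults = { closeButton: true, maxWidth: 600, content: "" };
var merged = extendDefaults(defaults, { maxWidth: 400, content: "Howdy" });

console.log(merged.maxWidth);    // 400 (user option wins)
console.log(merged.closeButton); // true (untouched default survives)
```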
Now that we have our Modal object, and it’s configurable, how about adding a public method? The first thing a developer is going to want to do with a modal is to open it up, so let’s make it happen.
// Create an immediately invoked functional expression to wrap our code
(function() {
// Define our constructor
this.Modal = function() {
...
}
// Public Methods
Modal.prototype.open = function() {
// open code goes here
}
// Private Methods
// Utility method to extend defaults with user options
function extendDefaults(source, properties) {
...
}
}());
In order to expose a public method, we attach it to our Modal object’s prototype. When you add methods to the object’s prototype, each new instance shares the same methods, rather than creating new methods for each instance. This is super performant, unless you have multi-level subclassing, in which case traversing the prototype chain negates your performance boost. We have also added comments and structured our component so that we have three sections: constructor, public methods, and private methods. This doesn’t do anything functionally, but it keeps everything organized and readable.
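That sharing is easy to verify with a toy constructor (a hypothetical Counter, unrelated to the plugin):

```javascript
function Counter() { this.count = 0; }

// Defined once on the prototype: every instance delegates to this one function
Counter.prototype.increment = function() { return ++this.count; };

var a = new Counter();
var b = new Counter();

console.log(a.increment === b.increment); // true: one shared function object
a.increment();
console.log(a.count); // 1 (state is still per-instance)
console.log(b.count); // 0
```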
How about we take a step back? We now have a nice plugin architecture, with a constructor, options, and a public method. It is time for the bread and butter, so let’s revisit what this plugin is supposed to do. Our plugin should:
- Add a user-defined className option to the modal.
- If the closeButton option is true, add a close button.
- If the content option is an HTML string, set it as the modal’s content.
- If the content option is a domNode, set its interior content as the modal’s content.
- Set the maximum and minimum width based on maxWidth and minWidth respectively.
- Add an overlay if the overlay option is true.
- When opened, add a scotch-open class that we can use with our CSS to define an open state.
- When closed, remove the scotch-open class.
- If the modal is taller than the window, add a scotch-anchored class so that we can handle that scenario.

We can’t have a modal plugin without building a modal, so let’s create a private method that constructs a modal using our defined options:
function buildOut() {
var content, contentHolder, docFrag;
/*
* If content is an HTML string, append the HTML string.
* If content is a domNode, append its content.
*/
if (typeof this.options.content === "string") {
content = this.options.content;
} else {
content = this.options.content.innerHTML;
}
// Create a DocumentFragment to build with
docFrag = document.createDocumentFragment();
// Create modal element
this.modal = document.createElement("div");
this.modal.className = "scotch-modal " + this.options.className;
this.modal.style.minWidth = this.options.minWidth + "px";
this.modal.style.maxWidth = this.options.maxWidth + "px";
// If closeButton option is true, add a close button
if (this.options.closeButton === true) {
this.closeButton = document.createElement("button");
this.closeButton.className = "scotch-close close-button";
this.closeButton.innerHTML = "×";
this.modal.appendChild(this.closeButton);
}
// If overlay is true, add one
if (this.options.overlay === true) {
this.overlay = document.createElement("div");
this.overlay.className = "scotch-overlay " + this.options.classname;
docFrag.appendChild(this.overlay);
}
// Create content area and append to modal
contentHolder = document.createElement("div");
contentHolder.className = "scotch-content";
contentHolder.innerHTML = content;
this.modal.appendChild(contentHolder);
// Append modal to DocumentFragment
docFrag.appendChild(this.modal);
// Append DocumentFragment to body
document.body.appendChild(docFrag);
}
We start by getting our target content and creating a DocumentFragment. A DocumentFragment is used to construct collections of DOM elements outside of the DOM, and lets us add everything we have built to the DOM in one operation. If our content is a string, we set our content variable to the option value. If our content is a domNode, we set our content variable to its interior HTML via innerHTML.
Next up, we create our actual modal element, and add our className and minWidth/maxWidth properties to it. We create it with a default scotch-modal class for initial styling. Then, based upon option values, we conditionally create a close button and an overlay in the same fashion.
Finally, we add our content to a content holder div, and append it to our modal element. After appending our modal to the Document Fragment and appending our Document Fragment to the body, we now have a built modal on the page!
This modal (hopefully) isn’t going to close itself, so providing we have a close button and/or an overlay, we need to bind events to them to make the magic happen. Below, we create a method to attach these events:
function initializeEvents() {
if (this.closeButton) {
this.closeButton.addEventListener('click', this.close.bind(this));
}
if (this.overlay) {
this.overlay.addEventListener('click', this.close.bind(this));
}
}
We attach our events using the addEventListener method, passing a callback to a method we haven’t created yet called close. Notice we don’t just pass close directly: we use the bind method and pass our reference to this, which references our Modal object. This makes sure that our method has the right context when using the this keyword.
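Why the bind matters can be shown without any DOM at all (the object below is a stand-in, not the real plugin):

```javascript
var fakeModal = {
  name: "myModal",
  close: function() { return "closing " + this.name; }
};

// Detaching the method loses its connection to fakeModal,
// which is exactly what happens when you hand a method to addEventListener.
var detached = fakeModal.close;

// bind() locks `this` to fakeModal no matter how the callback is invoked later
var bound = fakeModal.close.bind(fakeModal);
console.log(bound()); // "closing myModal"
```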
Let’s head back to the public open method we created earlier. Time to make it shine:
Modal.prototype.open = function() {
// Build out our Modal
buildOut.call(this);
// Initialize our event listeners
initializeEvents.call(this);
/*
* After adding elements to the DOM, use getComputedStyle
* to force the browser to recalc and recognize the elements
* that we just added. This is so that CSS animation has a start point
*/
window.getComputedStyle(this.modal).height;
/*
* Add our open class and check if the modal is taller than the window
* If so, our anchored class is also applied
*/
this.modal.className = this.modal.className +
(this.modal.offsetHeight > window.innerHeight ?
" scotch-open scotch-anchored" : " scotch-open");
if (this.overlay) this.overlay.className = this.overlay.className + " scotch-open";
}
When opening our modal, we first have to build it. We call our buildOut method using the call method, similarly to the way we did in our event binding with bind. We are simply passing the proper value of this to the method. We then call initializeEvents to make sure any applicable events get bound. Now, I know you are saying to yourself, what is going on with getComputedStyle? Check this out: we are using CSS3 for our transitions. The modal hides and shows based upon applied class names. When you add an element to the DOM and then immediately add a class, the browser may not have interpreted the initial style yet, so you never see a transition from the initial state. That’s where window.getComputedStyle comes into play. Calling it forces the browser to recognize our initial state, and keeps our modal transition looking mint. Lastly, we add the scotch-open class name.

But that’s not all. We currently have our modal centered, but if its height exceeds the viewport, that’s gonna look silly. We use a ternary operator to check the heights, and if our modal is too tall, we also add the scotch-anchored class name to handle this situation.
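Pulled out of the DOM, that ternary is just string concatenation (the heights below are made-up values standing in for offsetHeight and window.innerHeight):

```javascript
function openClasses(className, modalHeight, windowHeight) {
  // Too tall to center vertically? Anchor it to the top as well.
  return className + (modalHeight > windowHeight
    ? " scotch-open scotch-anchored"
    : " scotch-open");
}

console.log(openClasses("scotch-modal", 900, 700)); // "scotch-modal scotch-open scotch-anchored"
console.log(openClasses("scotch-modal", 400, 700)); // "scotch-modal scotch-open"
```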
Like anything else that is completely amazing in this world, at some point our modal must come to an end. So let’s build a method to send it to the other side:
Modal.prototype.close = function() {
// Store the value of this
var _ = this;
// Remove the open class name
this.modal.className = this.modal.className.replace(" scotch-open", "");
if (this.overlay) this.overlay.className = this.overlay.className.replace(" scotch-open", "");
/*
* Listen for CSS transitionend event and then
* Remove the nodes from the DOM
*/
this.modal.addEventListener(this.transitionEnd, function() {
_.modal.parentNode.removeChild(_.modal);
});
if (this.overlay) this.overlay.addEventListener(this.transitionEnd, function() {
if (_.overlay.parentNode) _.overlay.parentNode.removeChild(_.overlay);
});
}
In order to have our modal transition out, we remove its scotch-open class name. The same applies to our overlay. But we aren’t finished yet. We have to remove our modal from the DOM, but it’s going to look ridiculous if we don’t wait until our animation has completed. We accomplish this by attaching an event listener to detect when our transition is complete, and when it is, it’s “Peace out, cub scout”. You may be wondering where this.transitionEnd came from. I’ll tell you. Browsers have different event names for transitions ending, so I wrote a method to detect which one to use, and called it in the constructor. See below:
// Utility method to determine which transitionend event is supported
function transitionSelect() {
var el = document.createElement("div");
if (el.style.WebkitTransition !== undefined) return "webkitTransitionEnd";
if (el.style.OTransition !== undefined) return "oTransitionEnd";
return 'transitionend';
}
this.Modal = function() {
...
// Determine proper prefix
this.transitionEnd = transitionSelect();
....
}
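The same probe-and-fall-through detection works on any capability map. A DOM-free sketch, where a plain object stands in for the test element’s style:

```javascript
// Return the first event name whose vendor property the "style" object exposes
function pickTransitionEvent(style) {
  if (style.WebkitTransition !== undefined) return "webkitTransitionEnd";
  if (style.OTransition !== undefined) return "oTransitionEnd";
  return "transitionend";
}

console.log(pickTransitionEvent({ WebkitTransition: "" })); // "webkitTransitionEnd"
console.log(pickTransitionEvent({ transition: "" }));       // "transitionend"
```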
And there you have it. We have built out our modal JavaScript plugin. Comments and spacing aside, we did it in about 100 lines of pure, sweet vanilla JavaScript. Check out our finished product below, and then get ready to talk CSS:
// Create an immediately invoked functional expression to wrap our code
(function() {
// Define our constructor
this.Modal = function() {
// Create global element references
this.closeButton = null;
this.modal = null;
this.overlay = null;
// Determine proper prefix
this.transitionEnd = transitionSelect();
// Define option defaults
var defaults = {
className: 'fade-and-drop',
closeButton: true,
content: "",
maxWidth: 600,
minWidth: 280,
overlay: true
}
// Create options by extending defaults with the passed-in arguments
if (arguments[0] && typeof arguments[0] === "object") {
this.options = extendDefaults(defaults, arguments[0]);
}
}
// Public Methods
Modal.prototype.close = function() {
var _ = this;
this.modal.className = this.modal.className.replace(" scotch-open", "");
if (this.overlay) this.overlay.className = this.overlay.className.replace(" scotch-open", "");
this.modal.addEventListener(this.transitionEnd, function() {
_.modal.parentNode.removeChild(_.modal);
});
if (this.overlay) this.overlay.addEventListener(this.transitionEnd, function() {
if (_.overlay.parentNode) _.overlay.parentNode.removeChild(_.overlay);
});
}
Modal.prototype.open = function() {
buildOut.call(this);
initializeEvents.call(this);
window.getComputedStyle(this.modal).height;
this.modal.className = this.modal.className +
(this.modal.offsetHeight > window.innerHeight ?
" scotch-open scotch-anchored" : " scotch-open");
if (this.overlay) this.overlay.className = this.overlay.className + " scotch-open";
}
// Private Methods
function buildOut() {
var content, contentHolder, docFrag;
/*
* If content is an HTML string, append the HTML string.
* If content is a domNode, append its content.
*/
if (typeof this.options.content === "string") {
content = this.options.content;
} else {
content = this.options.content.innerHTML;
}
// Create a DocumentFragment to build with
docFrag = document.createDocumentFragment();
// Create modal element
this.modal = document.createElement("div");
this.modal.className = "scotch-modal " + this.options.className;
this.modal.style.minWidth = this.options.minWidth + "px";
this.modal.style.maxWidth = this.options.maxWidth + "px";
// If closeButton option is true, add a close button
if (this.options.closeButton === true) {
this.closeButton = document.createElement("button");
this.closeButton.className = "scotch-close close-button";
this.closeButton.innerHTML = "×";
this.modal.appendChild(this.closeButton);
}
// If overlay is true, add one
if (this.options.overlay === true) {
this.overlay = document.createElement("div");
this.overlay.className = "scotch-overlay " + this.options.className;
docFrag.appendChild(this.overlay);
}
// Create content area and append to modal
contentHolder = document.createElement("div");
contentHolder.className = "scotch-content";
contentHolder.innerHTML = content;
this.modal.appendChild(contentHolder);
// Append modal to DocumentFragment
docFrag.appendChild(this.modal);
// Append DocumentFragment to body
document.body.appendChild(docFrag);
}
function extendDefaults(source, properties) {
var property;
for (property in properties) {
if (properties.hasOwnProperty(property)) {
source[property] = properties[property];
}
}
return source;
}
function initializeEvents() {
if (this.closeButton) {
this.closeButton.addEventListener('click', this.close.bind(this));
}
if (this.overlay) {
this.overlay.addEventListener('click', this.close.bind(this));
}
}
function transitionSelect() {
var el = document.createElement("div");
if (el.style.WebkitTransition !== undefined) return "webkitTransitionEnd";
if (el.style.OTransition !== undefined) return "oTransitionEnd";
return 'transitionend';
}
}());
This is a CSS3 modal, so JavaScript is only half the battle. To recap, we have a base class on our modal of scotch-modal, and an open class of scotch-open. Modals that exceed the viewport height have a class of scotch-anchored, and potentially have an overlay (scotch-overlay) and a close button (scotch-close). Let’s apply some base styles:
/* Modal Base CSS */
.scotch-overlay
{
position: fixed;
z-index: 9998;
top: 0;
left: 0;
opacity: 0;
width: 100%;
height: 100%;
-webkit-transition: 1ms opacity ease;
-moz-transition: 1ms opacity ease;
-ms-transition: 1ms opacity ease;
-o-transition: 1ms opacity ease;
transition: 1ms opacity ease;
background: rgba(0,0,0,.6);
}
.scotch-modal
{
position: absolute;
z-index: 9999;
top: 50%;
left: 50%;
opacity: 0;
width: 94%;
padding: 24px 20px;
-webkit-transition: 1ms opacity ease;
-moz-transition: 1ms opacity ease;
-ms-transition: 1ms opacity ease;
-o-transition: 1ms opacity ease;
transition: 1ms opacity ease;
-webkit-transform: translate(-50%, -50%);
-moz-transform: translate(-50%, -50%);
-ms-transform: translate(-50%, -50%);
-o-transform: translate(-50%, -50%);
transform: translate(-50%, -50%);
border-radius: 2px;
background: #fff;
}
.scotch-modal.scotch-open.scotch-anchored
{
top: 20px;
-webkit-transform: translate(-50%, 0);
-moz-transform: translate(-50%, 0);
-ms-transform: translate(-50%, 0);
-o-transform: translate(-50%, 0);
transform: translate(-50%, 0);
}
.scotch-modal.scotch-open
{
opacity: 1;
}
.scotch-overlay.scotch-open
{
opacity: 1;
}
/* Close Button */
.scotch-close
{
font-family: Helvetica,Arial,sans-serif;
font-size: 24px;
font-weight: 700;
line-height: 12px;
position: absolute;
top: 5px;
right: 5px;
padding: 5px 7px 7px;
cursor: pointer;
color: #fff;
border: 0;
outline: none;
background: #e74c3c;
}
.scotch-close:hover
{
background: #c0392b;
}
In a nutshell, we are making our modal and overlay just appear by default. We leave a 1ms transition on by default so that we can be sure that our transitionend event actually fires. We use the translate centering method to vertically and horizontally center our modal in the window. If scotch-anchored is applied, we center horizontally and anchor our modal 20px from the top of the window.
This is a great starting base for adding custom animations via the className option, so why don’t we go ahead and create a custom animation for the fade-and-drop default className of our plugin:
/* Default Animation */
.scotch-overlay.fade-and-drop
{
display: block;
opacity: 0;
}
.scotch-modal.fade-and-drop
{
top: -300%;
opacity: 1;
display: block;
}
.scotch-modal.fade-and-drop.scotch-open
{
top: 50%;
-webkit-transition: 500ms top 500ms ease;
-moz-transition: 500ms top 500ms ease;
-ms-transition: 500ms top 500ms ease;
-o-transition: 500ms top 500ms ease;
transition: 500ms top 500ms ease;
}
.scotch-modal.fade-and-drop.scotch-open.scotch-anchored
{
-webkit-transition: 500ms top 500ms ease;
-moz-transition: 500ms top 500ms ease;
-ms-transition: 500ms top 500ms ease;
-o-transition: 500ms top 500ms ease;
transition: 500ms top 500ms ease;
}
.scotch-overlay.fade-and-drop.scotch-open
{
top: 0;
opacity: 1;
-webkit-transition: 500ms opacity ease;
-moz-transition: 500ms opacity ease;
-ms-transition: 500ms opacity ease;
-o-transition: 500ms opacity ease;
transition: 500ms opacity ease;
}
.scotch-modal.fade-and-drop
{
-webkit-transition: 500ms top ease;
-moz-transition: 500ms top ease;
-ms-transition: 500ms top ease;
-o-transition: 500ms top ease;
transition: 500ms top ease;
}
.scotch-overlay.fade-and-drop
{
-webkit-transition: 500ms opacity 500ms ease;
-moz-transition: 500ms opacity 500ms ease;
-ms-transition: 500ms opacity 500ms ease;
-o-transition: 500ms opacity 500ms ease;
transition: 500ms opacity 500ms ease;
}
For our fade-and-drop transition, we want the overlay to fade in, and the modal to drop in. We utilize the delay argument of the transition property shorthand to wait 500ms until the overlay has faded in. For our outro transition, we want the modal to fly back up out of sight and then fade the overlay out. Again, we use the delay property to wait for the modal animation to complete.
Now we have a fully working modal plugin. Woo! So how do we actually use it? Using the new keyword, we can create a new modal and assign it to a variable:
var myModal = new Modal();
myModal.open();
Without the content option set, it is going to be a pretty lame modal, so let’s go ahead and pass in some options:
var myModal = new Modal({
content: '<p>Modals rock!</p>',
maxWidth: 600
});
myModal.open();
What if we want to set up a custom animation? We pass a class through the className option that we can style against:
var myModal = new Modal({
className: 'custom-animation',
content: '<p>Modals rock!</p>',
maxWidth: 600
});
myModal.open();
and then in our CSS, reference it and do your thing:
.scotch-modal.custom-animation {
-webkit-transition: 1ms -webkit-transform ease;
-moz-transition: 1ms -moz-transform ease;
-ms-transition: 1ms -ms-transform ease;
-o-transition: 1ms -o-transform ease;
transition: 1ms transform ease;
-webkit-transform: scale(0);
-moz-transform: scale(0);
-ms-transform: scale(0);
-o-transform: scale(0);
transform: scale(0);
}
.scotch-modal.custom-animation.scotch-open {
-webkit-transform: scale(1);
-moz-transform: scale(1);
-ms-transform: scale(1);
-o-transform: scale(1);
transform: scale(1);
}
I know what you’re thinking:
“Ken, what if we want to add new features to our plugin?”
If you haven’t realized it by now, I’ll spill the beans: this article isn’t about writing a modal, it’s about writing a plugin. If you have been following along, you should have the tools required to do just that. Say you want to make the plugin open automatically when instantiated. Let’s add an option for that. First, we add the option to our defaults in our constructor method.
// Define our constructor
this.Modal = function() {
// Create global element references
this.closeButton = null;
this.modal = null;
this.overlay = null;
// Define option defaults
var defaults = {
autoOpen: false,
className: 'fade-and-drop',
closeButton: true,
content: "",
maxWidth: 600,
minWidth: 280,
overlay: true
}
// Create options by extending defaults with the passed-in arguments
if (arguments[0] && typeof arguments[0] === "object") {
this.options = extendDefaults(defaults, arguments[0]);
}
}
Next, we check if the option is true, and if so, fire our open method.
// Define our constructor
this.Modal = function() {
// Create global element references
this.closeButton = null;
this.modal = null;
this.overlay = null;
// Define option defaults
var defaults = {
autoOpen: false,
className: 'fade-and-drop',
closeButton: true,
content: "",
maxWidth: 600,
minWidth: 280,
overlay: true
}
// Create options by extending defaults with the passed-in arguments
if (arguments[0] && typeof arguments[0] === "object") {
this.options = extendDefaults(defaults, arguments[0]);
}
if (this.options.autoOpen === true) this.open();
}
It is as simple as that.
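The pattern generalizes: merge options, then let the constructor act on them. A DOM-free sketch with a hypothetical Widget (not part of the plugin):

```javascript
function Widget(options) {
  this.opened = false;
  this.options = options || { autoOpen: false };
  // Option-driven behavior: call a public method straight from the constructor
  if (this.options.autoOpen === true) this.open();
}
Widget.prototype.open = function() { this.opened = true; };

console.log(new Widget({ autoOpen: true }).opened); // true
console.log(new Widget().opened);                   // false
```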
I sincerely hope that after reading this, everyone learned something they didn’t know before. I personally learned a number of things during the course of writing it! We now have a fully functioning CSS3 modal plugin, but don’t stop here. Make it yours. Add features that you think would be helpful, craft some custom transitions, go absolutely bananas. You don’t even need to make a better modal. Take the plugin-building skills you acquired here today, and go build a brand new plugin. Who knows, it could be the next big thing!
If you have made it this far, I appreciate your time and diligence and I look forward to writing more fun tutorials here in the future. In the meantime, check out some cool demos below!
http://codepen.io/kenwheeler/pen/LvGjK
Making HTTP requests is a vital operation in the life of most front-end applications. Angular 2, which is the hottest thing right now, has a really cool way of doing that, and that is what we are going to cover together in this tutorial. We will learn how to make HTTP requests using the RxJS Observable library.
We will create a comments app. Here’s a demo and a quick look:
And a couple of screenshots for the final app:
The Observable proposal is in stage 1 so there is a chance for native support in the future. Observables are similar to Promises but with major differences that make them better. The key differences are:
Observables | Promises
---|---
Observables handle multiple values over time | Promises are only called once and will return a single value
Observables are cancellable | Promises are not cancellable
The ability of Observables being able to handle multiple values over time makes them a good candidate for working with real-time data, events, and any sort of stream you can think of.
Being able to cancel Observables gives better control when working with the in-flow of values from a stream. The common example is the auto-complete widget which sends a request for every key-stroke.
If you are searching for angular in an auto-complete, the first request is with 'a' and then 'an'. The scary thing is that the response for 'an' might come back before the response for 'a', which produces messy data. With Observables, you have better control to hook in and cancel the 'a' request because a request for 'an' is coming through.
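Plain JavaScript can approximate that cancellation with a request counter that discards stale responses; this is only a sketch of the idea, not how RxJS implements it:

```javascript
var latestRequestId = 0;
var shown = null;

function handleResponse(requestId, result) {
  // Ignore this response if a newer keystroke has issued a newer request
  if (requestId !== latestRequestId) return;
  shown = result;
}

// Two keystrokes: "a" then "an"
var idA = ++latestRequestId;  // request for "a"
var idAn = ++latestRequestId; // request for "an"

handleResponse(idAn, "results for an"); // fast response arrives first
handleResponse(idA, "results for a");   // slow, stale response is discarded

console.log(shown); // "results for an"
```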
Observables are an ES7 proposal, which means you need to make use of an external library to use them today. RxJS is a good one. RxJS also provides Observable operators which you can use to manipulate the data being emitted. Some of these operators are:
Above is a list of popular operators you will encounter in most projects but those are not all. See RxMarbles for more.
Hopefully, you have seen what Observables are capable of. The good news is, you can also use Observables to handle HTTP requests rather than Promises. I understand you might have started in the days when callbacks were the hot thing for handling XHR; then, a couple of years back, you got the news that callbacks were bad practice and you had to use Promises. Now, again, you are hearing that you should use Observables rather than Promises. That is a general tech challenge, and we just have to get used to change and growth to build better and cooler stuff. Trust me, you won’t regret this one.
The rest of this article will focus on building a demo that uses Observables to handle HTTP requests.
Angular Quickstart is a good boilerplate for a basic Angular project and we should be fine with that. Clone the repository and install all its dependencies:
- # Clone repo
- git clone https://github.com/angular/quickstart scotch-http
-
- # Enter into directory
- cd scotch-http
-
- # Install dependencies
- npm install
That gives a good platform to get our hands dirty.
The demo repository which is provided has a server folder that serves API endpoints for our application. Building these API endpoints is beyond this scope, but the server is a basic Node application built with ES6 and transpiled with Babel. When you clone the demo, run the following to start the server:
- # Move in to server project folder
- cd server
-
- # Install dependencies
- npm install
-
- # Run
- npm start
Before moving on to building something, let’s have a birds-eye view of what the structure of our application will look like:
- |----app
- |------Comments
- |--------Components
- |----------comment-box.component.ts # Box
- |----------comment-form.component.ts # Form
- |----------comment-list.component.ts # List
- |----------index.ts # Comment components curator
- |--------Model
- |----------comment.ts # Comment Model (Interface/Structure)
- |--------Services
- |----------comment.service.ts # HTTP service
- |--------comment.module.ts # Comment Module
- |------app.component.ts # Entry
- |------app.module.ts # Root Module
- |------emitter.service.ts #Utility service for component interaction
- |------main.ts # Bootstrapper
Web components are awesome but their hierarchical nature makes them quite tricky to manage. Some components are so dumb that all they can do is receive data and spread the data in a view or emit events.
This might sound simple because these kinds of components can just receive data from their parent component which could be a smarter component that knows how to handle data. In Angular, data is passed from parent to child using Input.
Another scenario is when there is a change in the child component and the parent component needs to be notified about the change. The keyword is notify, which means the child will raise an event that the parent is listening to. This is done with Output in Angular.
The actual pain is when siblings or cousins need to notify each other of internal changes. Angular does not provide a core solution for this, but there are solutions. The most common way is to have a central event hub that keeps track of events using an ID:
// Credit to https://gist.github.com/sasxa
// Imports
import { Injectable, EventEmitter } from '@angular/core';

@Injectable()
export class EmitterService {
  // Event store
  private static _emitters: { [ID: string]: EventEmitter<any> } = {};

  // Return the emitter registered under the given ID,
  // creating and storing it first if it does not exist yet
  static get(ID: string): EventEmitter<any> {
    if (!this._emitters[ID])
      this._emitters[ID] = new EventEmitter();
    return this._emitters[ID];
  }
}
All this does is store EventEmitters in an _emitters object, keyed by ID, and hand back the emitter for a given ID through the get() method so that callers can emit or subscribe to it.
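Stripped of Angular specifics, the event-hub pattern can be sketched in plain JavaScript. This is only a sketch of the idea — the real service returns Angular EventEmitters, and the names below are invented for illustration:

```javascript
// A minimal, framework-free event hub: emitters are stored in a map
// keyed by a string ID, so any two components holding the same ID can
// talk to each other without a direct reference.
const emitters = {};

function getEmitter(id) {
  if (!emitters[id]) {
    const listeners = [];
    emitters[id] = {
      subscribe: (fn) => listeners.push(fn),
      emit: (payload) => listeners.forEach((fn) => fn(payload)),
    };
  }
  return emitters[id];
}

// A "comment list" component subscribes on a shared ID...
const LIST_ID = 'COMMENT_COMPONENT_LIST';
let latest = null;
getEmitter(LIST_ID).subscribe((comments) => { latest = comments; });

// ...and a sibling component notifies it by emitting on the same ID.
getEmitter(LIST_ID).emit(['first comment']);
console.log(latest); // ['first comment']
```

The important property is that getEmitter always returns the same emitter for the same ID, which is what lets components that never reference each other share events.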
The actual trick is to set these IDs in a parent or grandparent container and pass them down to each child and grandchild that needs to notify a parent, then use the ngOnChanges lifecycle hook to listen for when the ID is poked. You can then subscribe to the emitted event in ngOnChanges.
Sounds twisted? We will clarify down the road.
Before we create the components, let’s do what we have come here for and what we have been waiting for. Below is the HTTP signature as is in the Angular 2 source:
/**
* Performs any type of http request. First argument is required, and can either be a url or
* a {@link Request} instance. If the first argument is a url, an optional {@link RequestOptions}
* object can be provided as the 2nd argument. The options object will be merged with the values
* of {@link BaseRequestOptions} before performing the request.
*/
request(url: string | Request, options?: RequestOptionsArgs): Observable<Response>;
/**
* Performs a request with `get` http method.
*/
get(url: string, options?: RequestOptionsArgs): Observable<Response>;
/**
* Performs a request with `post` http method.
*/
post(url: string, body: any, options?: RequestOptionsArgs): Observable<Response>;
/**
* Performs a request with `put` http method.
*/
put(url: string, body: any, options?: RequestOptionsArgs): Observable<Response>;
/**
* Performs a request with `delete` http method.
*/
delete(url: string, options?: RequestOptionsArgs): Observable<Response>;
/**
* Performs a request with `patch` http method.
*/
patch(url: string, body: any, options?: RequestOptionsArgs): Observable<Response>;
/**
* Performs a request with `head` http method.
*/
head(url: string, options?: RequestOptionsArgs): Observable<Response>;
Each method takes in a URL, and a payload where applicable, and returns a generic Observable response type. We are only interested in post, put, get, and delete for this tutorial, but the above shows what more you can try out.
The service class has the following structure:
// Imports
import { Injectable } from '@angular/core';
import { Http, Response, Headers, RequestOptions } from '@angular/http';
import { Comment } from '../model/comment';
import { Observable } from 'rxjs/Rx';
// Import RxJs required methods
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/catch';
@Injectable()
export class CommentService {
  // Resolve HTTP using the constructor
  constructor (private http: Http) {}

  // Private instance variable to hold the base url
  private commentsUrl = 'http://localhost:3000/api/comments';
}
We are importing the required libraries for our service to behave as expected. Notice that the Observable we spoke about has also been imported and is ready for use. The map and catch Observable operators, which will help us manipulate data and handle errors respectively, have also been imported. Then we inject Http in the constructor and keep a reference to the base URL of our API.
// Fetch all existing comments
getComments() : Observable<Comment[]> {
  // ...using get request
  return this.http.get(this.commentsUrl)
    // ...and calling .json() on the response to return data
    .map((res:Response) => res.json())
    //...errors if any
    .catch((error:any) => Observable.throw(error.json().error || 'Server error'));
}
Using the http instance we already have on the class, we call its get method, passing in the base URL because that is the endpoint where we can find the list of comments.
We are maintaining strictness by ensuring that the service instance methods always return an Observable of Comment arrays:
export class Comment {
  constructor(
    public id: Date,
    public author: string,
    public text: string
  ){}
}
With the map operator, we call the .json method on the response because the actual response is not a collection of data but a JSON string.
Note: Angular 4.3 uses JSON response by default. Therefore, you can get rid of that line if you are using the latest version of Angular.
It is always advisable to handle errors so we can use the catch operator to return another subscribable Observable but this time a failed one.
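The map/catch pair can be read as the following plain function — a sketch of the semantics only, not Angular's actual Http types; the res shape here is made up for illustration:

```javascript
// Sketch of what the service's map/catch pipeline does to a response:
// parse the JSON body on success, surface a readable error on failure.
function handleResponse(res) {
  if (!res.ok) {
    // The `.catch(...)` step: turn whatever came back into an error,
    // falling back to a generic message.
    throw new Error(res.body ? JSON.parse(res.body).error : 'Server error');
  }
  // The `.map((res) => res.json())` step: the body arrives as a JSON
  // string, so parse it once before handing it to subscribers.
  return JSON.parse(res.body);
}

console.log(handleResponse({ ok: true, body: '[{"id":1}]' })); // [ { id: 1 } ]

try {
  handleResponse({ ok: false, body: '{"error":"Not found"}' });
} catch (e) {
  console.log(e.message); // Not found
}
```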
The rest of the code has the above structure but different HTTP methods and arguments:
// Add a new comment
addComment (body: Object): Observable<Comment[]> {
  let bodyString = JSON.stringify(body); // Stringify payload
  let headers = new Headers({ 'Content-Type': 'application/json' }); // ... Set content type to JSON
  let options = new RequestOptions({ headers: headers }); // Create a request option

  return this.http.post(this.commentsUrl, body, options) // ...using post request
    .map((res:Response) => res.json()) // ...and calling .json() on the response to return data
    .catch((error:any) => Observable.throw(error.json().error || 'Server error')); //...errors if any
}

// Update a comment
updateComment (body: Object): Observable<Comment[]> {
  let bodyString = JSON.stringify(body); // Stringify payload
  let headers = new Headers({ 'Content-Type': 'application/json' }); // ... Set content type to JSON
  let options = new RequestOptions({ headers: headers }); // Create a request option

  return this.http.put(`${this.commentsUrl}/${body['id']}`, body, options) // ...using put request
    .map((res:Response) => res.json()) // ...and calling .json() on the response to return data
    .catch((error:any) => Observable.throw(error.json().error || 'Server error')); //...errors if any
}

// Delete a comment
removeComment (id:string): Observable<Comment[]> {
  return this.http.delete(`${this.commentsUrl}/${id}`) // ...using delete request
    .map((res:Response) => res.json()) // ...and calling .json() on the response to return data
    .catch((error:any) => Observable.throw(error.json().error || 'Server error')); //...errors if any
}
The above makes a post, a put, and a delete request, converts responses to JSON, and catches errors if any. Now you see, Observables are not as much of a mouthful as they seemed in the beginning. All that is left to do is subscribe to the Observable and bind the data to the views as it is emitted. Let's build our components.
Time to tie things together. With the emitter and data services in place, we can now build components that tie both together to make a usable application.
The comment box is the heart of our application. It holds the primitive details which include the comment author and comment text:
// Imports
import { Component, Input, Output, EventEmitter } from '@angular/core';
import { Comment } from '../model/comment'
import { EmitterService } from '../../emitter.service';
import { CommentService } from '../services/comment.service';
// Component decorator
@Component({
  selector: 'comment-box',
  template: `
    <!-- Removed for brevity's sake -->
  `
  // No providers here because they are passed down from the parent component
})
// Component class
export class CommentBoxComponent {
  // Constructor
  constructor(
    private commentService: CommentService
  ){}

  // Define input properties
  @Input() comment: Comment;
  @Input() listId: string;
  @Input() editId: string;

  editComment() {
    // Emit edit event
    EmitterService.get(this.editId).emit(this.comment);
  }

  deleteComment(id:string) {
    // Call removeComment() from CommentService to delete comment
    this.commentService.removeComment(id).subscribe(
      comments => {
        // Emit list event
        EmitterService.get(this.listId).emit(comments);
      },
      err => {
        // Log errors if any
        console.log(err);
      });
  }
}
The comment property, which is decorated with @Input, holds data passed from a parent component to the comment-box component. With that, we can access the author and text properties to be displayed on the view. The two methods, editComment and deleteComment, as their names imply, load the form with a comment to update and remove a comment, respectively.
editComment emits an edit event which is tracked by the input ID. You could already guess that a comment-form component is listening for this event. deleteComment calls removeComment on the CommentService instance to delete a comment. Once that is successful, it emits a list event for the comment-list component to refresh its data.
A payload is passed into the events, which the subscriber can get hold of. We do not have to pass the actual data; we could instead use a simple flag signaling that a change has been made, and then fetch the data in the respective component.
<div class="panel panel-default">
  <div class="panel-heading">{{comment.author}}</div>
  <div class="panel-body">
    {{comment.text}}
  </div>
  <div class="panel-footer">
    <button class="btn btn-info" (click)="editComment()"><span class="glyphicon glyphicon-edit"></span></button>
    <button class="btn btn-danger" (click)="deleteComment(comment.id)"><span class="glyphicon glyphicon-remove"></span></button>
  </div>
</div>
Use buttons to bind the edit and delete comment events to the view. The above snippet was removed from the comment-box component for brevity.
The comment form will consist of a text box for the author, a textarea for the text and a button to submit changes:
<form (ngSubmit)="submitComment()">
  <div class="form-group">
    <div class="input-group">
      <span class="input-group-addon" id="basic-addon1"><span class="glyphicon glyphicon-user"></span></span>
      <input type="text" class="form-control" placeholder="Author" [(ngModel)]="model.author" name="author">
    </div>
    <br />
    <textarea class="form-control" rows="3" placeholder="Text" [(ngModel)]="model.text" name="text"></textarea>
    <br />
    <button *ngIf="!editing" type="submit" class="btn btn-primary btn-block">Add</button>
    <button *ngIf="editing" type="submit" class="btn btn-warning btn-block">Update</button>
  </div>
</form>
There are actually two buttons, but only one is displayed at a time while the other is hidden. This behavior is common; we are just switching between edit mode and create mode.
// Imports
import { Component, EventEmitter, Input, OnChanges } from '@angular/core';
import { NgForm } from '@angular/forms';
import { Observable } from 'rxjs/Rx';
import { CommentBoxComponent } from './comment-box.component'
import { CommentService } from '../services/comment.service';
import { EmitterService } from '../../emitter.service';
import { Comment } from '../model/comment'
// Component decorator
@Component({
  selector: 'comment-form',
  template: `
    <!-- Removed for brevity, included above -->
  `
})
// Component class
export class CommentFormComponent implements OnChanges {
  // Constructor with injected service
  constructor(
    private commentService: CommentService
  ){}

  // Local properties
  private model = new Comment(new Date(), '', '');
  private editing = false;

  // Input properties
  @Input() editId: string;
  @Input() listId: string;

  submitComment(){
    // Variable to hold a reference of addComment/updateComment
    let commentOperation:Observable<Comment[]>;

    if(!this.editing){
      // Create a new comment
      commentOperation = this.commentService.addComment(this.model)
    } else {
      // Update an existing comment
      commentOperation = this.commentService.updateComment(this.model)
    }

    // Subscribe to observable
    commentOperation.subscribe(
      comments => {
        // Emit list event
        EmitterService.get(this.listId).emit(comments);
        // Empty model
        this.model = new Comment(new Date(), '', '');
        // Switch editing status
        if(this.editing) this.editing = !this.editing;
      },
      err => {
        // Log errors if any
        console.log(err);
      });
  }

  ngOnChanges() {
    // Listen to the 'edit' emitted event so as to populate the model
    // with the event payload
    EmitterService.get(this.editId).subscribe((comment:Comment) => {
      this.model = comment;
      this.editing = true;
    });
  }
}
There is a model property to keep track of data in the form. The model changes depending on the state of the application: when creating a new comment it is empty, but when editing it is filled with the data to edit.
The ngOnChanges method is responsible for toggling to edit mode by setting the editing property to true after it has loaded the model property with a comment to update. This comment is fetched by subscribing to the edit event we emitted previously.
Remember that the ngOnChanges method is called when there is a change on any Input property of a component.
The comment list is quite simple: it just iterates over a list of comments and passes the data to the comment box:
// Imports
import { Component, OnInit, Input, OnChanges } from '@angular/core';
import { Observable } from 'rxjs/Observable';
import { Comment } from '../model/comment';
import { CommentService } from '../services/comment.service';
import { EmitterService } from '../../emitter.service';
// Component decorator
@Component({
  selector: 'comment-list',
  template: `
    <comment-box
      [editId]="editId"
      [listId]="listId"
      *ngFor="let comment of comments"
      [comment]="comment">
    </comment-box>
  `
})
// Component class
export class CommentListComponent implements OnInit, OnChanges {
  // Local properties
  comments: Comment[];

  // Input properties
  @Input() listId: string;
  @Input() editId: string;

  // Constructor with injected service
  constructor(private commentService: CommentService) {}

  ngOnInit() {
    // Load comments
    this.loadComments()
  }

  loadComments() {
    // Get all comments
    this.commentService.getComments()
      .subscribe(
        comments => this.comments = comments, // Bind to view
        err => {
          // Log errors if any
          console.log(err);
        });
  }

  ngOnChanges(changes:any) {
    // Listen to the 'list' emitted event so as to reload
    // the comments whenever it fires
    EmitterService.get(this.listId).subscribe((comments:Comment[]) => { this.loadComments() });
  }
}
It implements OnInit and OnChanges as well. By implementing ngOnInit, we are able to load existing comments from the API, and by implementing ngOnChanges we are able to reload the comments when we delete, create, or update a comment.
Notice that the event we are subscribing to this time is the list event, which is emitted in the comment-form component when a new comment is created or an existing comment is updated. It is also emitted in the comment-box component when a comment is deleted.
This one is just a curator. It gathers all the comment components and exports them for the app component to import:
// Imports
import { Component} from '@angular/core';
import { EmitterService } from '../../emitter.service';
@Component({
  selector: 'comment-widget',
  template: `
    <div>
      <comment-form [listId]="listId" [editId]="editId"></comment-form>
      <comment-list [listId]="listId" [editId]="editId"></comment-list>
    </div>
  `,
})
export class CommentComponent {
  // Event tracking properties
  private listId = 'COMMENT_COMPONENT_LIST';
  private editId = 'COMMENT_COMPONENT_EDIT';
}
Now you see where the properties we have been passing around originated from.
If you looked closely at the code, you may have noticed that the comment service is never listed as a provider even though it is imported into some of the components. This is because, with the final release of Angular 2, we no longer need to do that; instead, we can make a service available to all members of a module through the module itself. This does not apply only to services but to all other members, including components, directives, and pipes. This is what our comment module looks like:
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';
import { HttpModule, JsonpModule } from '@angular/http';
import { CommentBoxComponent } from './components/comment-box.component';
import { CommentListComponent } from './components/comment-list.component';
import { CommentFormComponent } from './components/comment-form.component';
import { CommentComponent } from './components/index';
import { CommentService } from './services/comment.service';
@NgModule({
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    JsonpModule,
  ],
  declarations: [
    CommentBoxComponent,
    CommentFormComponent,
    CommentListComponent,
    CommentComponent
  ],
  providers: [
    CommentService
  ],
  exports: [
    CommentBoxComponent,
    CommentFormComponent,
    CommentListComponent,
    CommentComponent
  ]
})
export class CommentModule {
}
We exported the components as well so they are available not just to this module but to any other module that specifies CommentModule as an import. Our AppModule will do that.
This is the typical entry point of an Angular 2 app. If you have an NG2 application, you would recognize it. The key difference is that we are adding a comment widget to it:
// Imports
import { Component } from '@angular/core';
import { CommentComponent } from './comments/components/index'
@Component({
  selector: 'my-app',
  template: `
    <h1>Comments</h1>
    <comment-widget></comment-widget>
  `
})
export class AppComponent { }
Just as we have seen with the comment module, the app module configures our app. The major difference is that the comment module is a feature module while the app module is the root module (used to bootstrap the application).
The app module will declare the comment module as an import so that the comment module's exports are available to the app module's members:
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';
import { HttpModule, JsonpModule } from '@angular/http';
import { CommentModule } from './comments/comments.module';
import { AppComponent } from './app.component';
import { EmitterService } from './emitter.service';
@NgModule({
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    JsonpModule,
    CommentModule
  ],
  declarations: [
    AppComponent,
  ],
  providers: [
    EmitterService
  ],
  bootstrap: [ AppComponent ]
})
export class AppModule {
}
We bootstrap the application by providing it with the root module:
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app.module';
platformBrowserDynamic().bootstrapModule(AppModule);
What our app looks like
We started with a primary goal: handling HTTP requests with Observables. Fortunately, it turned out we achieved our goal and also gained some extra knowledge about component interaction and why you should choose Observables over Promises.
Note: Update: 30/03/2019
This article has been updated based on changes to both docker and angular since it was first written. The current version of angular is 7; the update also adds an attached docker volume to the angular client so that you don't need to run docker-compose build every time.
Docker allows us to run applications inside containers. These containers in most cases communicate with each other.
Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.
We’ll build an angular app in one container, point it to an Express API in another container, which connects to MongoDB in another container.
If you haven’t worked with Docker before, this would be a good starting point as we will explain every step covered, in some detail.
You need to have docker and docker-compose installed in your setup. Instructions for installing docker on your platform can be found here.
Instructions for installing docker-compose can be found here.
Verify your installation by running:
- docker -v
Output
Docker version 18.09.2, build 6247962
- docker-compose -v
Output
docker-compose version 1.23.2, build 1110ad01
- node -v
Output
v11.12.0
Next, you need to know how to build a simple Angular app and an Express App. We’ll be using the Angular CLI to build a simple app.
We’ll now separately build out these three parts of our app. The approach we are going to take is building the app in our local environment, then dockerizing the app.
Once these are running, we’ll connect the three docker containers. Note that we are only building two containers, Angular and the Express/Node API. The third container will be from a MongoDB image that we’ll just pull from the Docker Hub.
Docker Hub is a repository for docker images. It’s where we pull down official docker images such as MongoDB, NodeJs, Ubuntu, and we can also create custom images and push them to Docker Hub for other people to pull and use.
Let’s create a directory for our whole setup, we’ll call it mean-docker
.
- mkdir mean-docker
Next, we'll create an Angular app and make sure it runs in a docker container.
Create a directory called angular-client inside the mean-docker directory we created above, and initialize an Angular app with the Angular CLI.
We'll use npx, a tool that allows us to run CLI apps without installing them into our system. It comes preinstalled with npm since version 5.2.0.
- npx @angular/cli new angular-client
? Would you like to add Angular routing? No
? Which stylesheet format would you like to use? CSS
This scaffolds an Angular app, and npm installs the app’s dependencies. Our directory structure should be like this
└── mean-docker
└── angular-client
├── README.md
├── angular.json
├── e2e
├── node_modules
├── package.json
├── package-lock.json
├── src
├── tsconfig.json
└── tslint.json
Running npm start inside the angular-client directory should start the Angular app at http://localhost:4200.
To dockerize any app, we usually need to write a Dockerfile.
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
To quickly brainstorm on what our Angular app needs in order to run:

- Node.js, to install dependencies and run the dev server. A global Angular CLI install is not a requirement, since the package.json file has it as a dependency.
- The app's dependencies, installed with npm install.
- An exposed port, so we can reach the dev server at localhost:4200.
- A command, npm start, to run in the container, which in turn runs ng serve since it's a script in the package.json file.

With a container created from the image, our app should run. Those are the exact instructions we are going to write in our Dockerfile.
# Create image based on the official Node 10 image from dockerhub
FROM node:10
# Create a directory where our app will be placed
RUN mkdir -p /app
# Change directory so that our commands run inside this new directory
WORKDIR /app
# Copy dependency definitions
COPY package*.json /app/
# Install dependencies
RUN npm install
# Get all the code needed to run the app
COPY . /app/
# Expose the port the app runs in
EXPOSE 4200
# Serve the app
CMD ["npm", "start"]
I've commented the file to show clearly what each instruction does.
Note: Before we build the image, if you are keen, you may have noticed that the line COPY . /app/ copies our whole directory into the container, including node_modules. To ignore this, and other files that are irrelevant to our container, we can add a .dockerignore file and list what is to be ignored. This file is often identical to the .gitignore file.
Create a .dockerignore file:
node_modules/
One last thing we have to do before building the image is to ensure that the dev server binds to all network interfaces, so the app is reachable from outside the container. To ensure this, go into your package.json and change the start script to:
{
  ...
  "scripts": {
    "start": "ng serve --host 0.0.0.0",
    ...
  },
  ...
}
To build the image we will use the docker build command. The syntax is:
- docker build -t <image_tag>:<tag> <directory_with_Dockerfile>
Make sure you are in the mean-docker/angular-client directory, then build your image:
- cd angular-client
- docker build -t angular-client:dev .
-t is short for --tag, and refers to the name or tag given to the image to be built. In this case the tag will be angular-client:dev.
The . (dot) at the end refers to the current directory. Docker will look for the Dockerfile in our current directory and use it to build an image.
This could take a while depending on your internet connection.
Now that the image is built, we can run a container based on that image, using this syntax
- docker run -d --name <container_name> -p <host-port:exposed-port> <image-name>
The -d flag tells docker to run the container in detached mode, meaning it will run and return you to your host shell without attaching to the container.
- docker run -d --name angular-client -p 4200:4200 angular-client:dev
Output
8310253fe80373627b2c274c5a9de930dc7559b3dc8eef4abe4cb09aa1828a22
--name refers to the name that will be assigned to the container.
-p or --port refers to which port on our host machine should map to which port in the docker container. In this case, localhost:4200 should point to the container's port 4200, and thus the syntax 4200:4200.
Visiting localhost:4200 in your host browser should now serve the angular app from the container.
You can stop the container running with:
- docker stop angular-client
We've containerized the angular app; we are now two steps away from our complete setup.
Containerizing an express app should now be straightforward. Create a directory in the mean-docker directory called express-server.
- mkdir express-server
Add the following package.json file inside it:
{
  "name": "express-server",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "body-parser": "~1.15.2",
    "express": "~4.14.0"
  }
}
Then, we'll create a simple express app inside it. Create a file server.js:
- cd express-server
- touch server.js
- mkdir routes && cd routes
- touch api.js
// Get dependencies
const express = require('express');
const path = require('path');
const http = require('http');
const bodyParser = require('body-parser');
// Get our API routes
const api = require('./routes/api');
const app = express();
// Parsers for POST data
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
// Set our api routes
app.use('/', api);
// Get port from environment and store in Express.
const port = process.env.PORT || '3000';
app.set('port', port);
// Create HTTP server.
const server = http.createServer(app);
// Listen on provided port, on all network interfaces.
server.listen(port, () => console.log(`API running on localhost:${port}`));
[label mean-docker/express-server/routes/api.js]
const express = require('express');
const router = express.Router();

// GET api listing.
router.get('/', (req, res) => {
  res.send('api works');
});

module.exports = router;
This is a simple express app. Install the dependencies and start the app:
- npm install
- npm start
Going to localhost:3000 in your browser should serve the app.
To run this app inside a Docker container, we'll also create a Dockerfile for it. It should be pretty similar to what we already have for the angular-client.
# Create image based on the official Node 6 image from the dockerhub
FROM node:6
# Create a directory where our app will be placed
RUN mkdir -p /usr/src/app
# Change directory so that our commands run inside this new directory
WORKDIR /usr/src/app
# Copy dependency definitions
COPY package.json /usr/src/app
# Install dependencies
RUN npm install
# Get all the code needed to run the app
COPY . /usr/src/app
# Expose the port the app runs in
EXPOSE 3000
# Serve the app
CMD ["npm", "start"]
You can see the file is pretty much the same as the angular-client Dockerfile, except for the exposed port.
You could also add a .dockerignore file to ignore files we do not need:
node_modules/
We can then build the image and run a container based on the image with:
- docker build -t express-server:dev .
- docker run -d --name express-server -p 3000:3000 express-server:dev
Going to localhost:3000 in your browser should serve the API.
Once you are done, you can stop the container with
- docker stop express-server
The last part of our MEAN setup, before we connect them all together is the MongoDB. Now, we can’t have a Dockerfile to build a MongoDB image, because one already exists from the Docker Hub. We only need to know how to run it.
Assuming we had a MongoDB image already, we’d run a container based on the image with
- docker run -d --name mongodb -p 27017:27017 mongo
The image name in this instance is mongo, the last parameter, and the container name will be mongodb.
Docker will check to see if you have a mongo image already downloaded or built. If not, it will look for the image in the Docker Hub. If you run the above command, you should have a mongodb instance running inside a container.
To check if MongoDB is running, simply go to http://localhost:27017
in your browser, and you should see this message. It looks like you are trying to access MongoDB over HTTP on the native driver port.
Alternatively, if you have mongo installed on your host machine, simply run mongo in the terminal. It should give you the mongo shell without any errors.
To connect and run multiple containers with docker, we use Docker Compose.
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration.
docker-compose is usually installed when you install docker. So, to check if you have it installed, run:
- docker-compose
You should see a list of commands from docker-compose. If not, you can go through the installation guide here.
Note: Ensure that you have docker-compose version 1.6 and above by running docker-compose -v
Create a docker-compose.yml file at the root of our setup.
- touch docker-compose.yml
Our directory tree should now look like this.
.
├── angular-client
├── docker-compose.yml
└── express-server
Then edit the docker-compose.yml file:
version: '2' # specify docker-compose version

# Define the services/containers to be run
services:
  angular: # name of the first service
    build: angular-client # specify the directory of the Dockerfile
    ports:
      - "4200:4200" # specify port forwarding

  express: # name of the second service
    build: express-server # specify the directory of the Dockerfile
    ports:
      - "3000:3000" # specify port forwarding

  database: # name of the third service
    image: mongo # specify image to build container from
    ports:
      - "27017:27017" # specify port forwarding
The docker-compose.yml file is a simple configuration file telling docker-compose which containers to build. That's pretty much it.
Now, to run containers based on the three images, simply run
- docker-compose up
This will build the images if they are not already built, and run them. Once the containers are up, your terminal will show the combined logs from all three services.
You can now reach all three services: http://localhost:4200
, http://localhost:3000
, and MongoDB at mongodb://localhost:27017
. All three containers are running.
Finally, the fun part.
We now finally need to connect the three containers. We’ll first create a simple CRUD feature in our API using mongoose. You can go through Easily Develop Node.js and MongoDB Apps with Mongoose to get a more detailed explanation of mongoose.
First of all, add mongoose
to your express
server package.json
{
  "name": "express-server",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "body-parser": "~1.15.2",
    "express": "~4.14.0",
    "mongoose": "^4.7.0"
  }
}
We need to update our API to use MongoDB:
// Import dependencies
const mongoose = require('mongoose');
const express = require('express');
const router = express.Router();

// MongoDB URL from the docker-compose file
const dbHost = 'mongodb://database/mean-docker';

// Connect to mongodb
mongoose.connect(dbHost);

// create mongoose schema
const userSchema = new mongoose.Schema({
  name: String,
  age: Number
});

// create mongoose model
const User = mongoose.model('User', userSchema);

// GET api listing.
router.get('/', (req, res) => {
  res.send('api works');
});

// GET all users.
router.get('/users', (req, res) => {
  User.find({}, (err, users) => {
    if (err) return res.status(500).send(err);
    res.status(200).json(users);
  });
});

// GET one user.
router.get('/users/:id', (req, res) => {
  User.findById(req.params.id, (err, user) => {
    if (err) return res.status(500).send(err);
    res.status(200).json(user);
  });
});

// Create a user.
router.post('/users', (req, res) => {
  let user = new User({
    name: req.body.name,
    age: req.body.age
  });
  user.save(err => {
    if (err) return res.status(500).send(err);
    res.status(201).json({
      message: 'User created successfully'
    });
  });
});

module.exports = router;
There are two main differences. First of all, our connection to MongoDB is in the line const dbHost = 'mongodb://database/mean-docker';
. This database
is the same as the database service we created in the docker-compose
file.
We’ve also added the REST routes GET /users
, GET /users/:id
, and POST /users
.
Update the docker-compose
file, telling the express service to link to the database service.
version: '2' # specify docker-compose version

# Define the services/containers to be run
services:
  angular: # name of the first service
    build: angular-client # specify the directory of the Dockerfile
    ports:
      - "4200:4200" # specify port forwarding
    volumes:
      - ./angular-client:/app # this will enable changes made to the angular app to reflect in the container
  express: # name of the second service
    build: express-server # specify the directory of the Dockerfile
    ports:
      - "3000:3000" # specify port forwarding
    links:
      - database
  database: # name of the third service
    image: mongo # specify image to build container from
    ports:
      - "27017:27017" # specify port forwarding
The links
property of the docker-compose file creates a connection to the other service with the name of the service as the hostname. In this case database
will be the hostname. Meaning, to connect to it from the express
service, we should use database:27017
. That’s why we made the dbHost
equal to mongodb://database/mean-docker
.
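As an aside, with the version 2 Compose file format all services are attached to a shared default network and can already resolve each other by service name, so links
mostly adds an explicit dependency on top of that. A hedged alternative sketch of the express service using depends_on
for startup ordering instead (behavior is equivalent for our purposes):

```yaml
# Sketch: service names resolve over the default network without links.
# depends_on only controls start order; it does not wait until mongo is ready.
express:
  build: express-server
  ports:
    - "3000:3000"
  depends_on:
    - database
```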
Also, I’ve added a volume to the angular
service. This will let changes we make to the Angular app automatically trigger a recompile inside the container.
The last part is to connect the Angular app to the express server. To do this, we’ll need to make some modifications to our angular
app to consume the express
API.
Add the Angular HTTP Client.
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { HttpClientModule } from '@angular/common/http'; // add http client module
import { AppComponent } from './app.component';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    HttpClientModule // import http client module
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }
import { Component, OnInit } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
  title = 'app works!';

  // Link to our api, pointing to localhost
  API = 'http://localhost:3000';

  // Declare empty list of people
  people: any[] = [];

  constructor(private http: HttpClient) {}

  // Angular 2 Life Cycle event when component has been initialized
  ngOnInit() {
    this.getAllPeople();
  }

  // Add one person to the API
  addPerson(name, age) {
    this.http.post(`${this.API}/users`, { name, age })
      .subscribe(() => {
        this.getAllPeople();
      });
  }

  // Get all users from the API
  getAllPeople() {
    this.http.get(`${this.API}/users`)
      .subscribe((people: any) => {
        console.log(people);
        this.people = people;
      });
  }
}
Angular best practices guides usually recommend separating most logic into a service/provider. We’ve placed all the code in the component here for brevity.
We’ve imported the OnInit
interface to call events when the component is initialized, then added two methods, addPerson
and getAllPeople
, that call the API.
Notice that this time around, our API
is pointing to localhost
. This is because while the Angular 2 app will be running inside the container, it’s served to the browser. And the browser is the one that makes requests. It will thus make a request to the exposed Express API. As a result, we don’t need to link Angular and Express in the docker-compose.yml
file.
Next, we need to make some changes to the template. I first added bootstrap via CDN to the index.html
<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>Angular Client</title>
  <base href="/">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <!-- Bootstrap CDN -->
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.2/css/bootstrap.min.css">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
</head>
<body>
  <app-root>Loading...</app-root>
</body>
</html>
Then update the app.component.html
template
<!-- Bootstrap Navbar -->
<nav class="navbar navbar-light bg-faded">
  <div class="container">
    <a class="navbar-brand" href="#">Mean Docker</a>
  </div>
</nav>
<div class="container">
  <h3>Add new person</h3>
  <form>
    <div class="form-group">
      <label for="name">Name</label>
      <input type="text" class="form-control" id="name" #name>
    </div>
    <div class="form-group">
      <label for="age">Age</label>
      <input type="number" class="form-control" id="age" #age>
    </div>
    <button type="button" (click)="addPerson(name.value, age.value)" class="btn btn-primary">Add person</button>
  </form>
  <h3>People</h3>
  <!-- Bootstrap Card -->
  <div class="card card-block col-md-3" *ngFor="let person of people">
    <h4 class="card-title">{{person.name}} {{person.age}}</h4>
  </div>
</div>
The template above binds to the component’s properties and methods. We are almost done.
Since we’ve made changes to our code, we need to rebuild our images with Docker Compose:
- docker-compose up --build
The --build
flag tells docker compose
that we’ve made changes and it needs to do a clean build of our images.
Once this is done, go to localhost:4200
in your browser and try to add a person.
We are getting a No 'Access-Control-Allow-Origin'
error. To quickly fix this, we need to enable cross-origin requests in our express
in our express
app. We’ll do this with a simple middleware.
// Code commented out for brevity

// Parsers for POST data
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));

// Cross Origin middleware
app.use(function (req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
  next();
});

// Set our api routes
app.use('/', api);

// Code commented out for brevity
We can now run docker-compose
again with the build
flag. You should be in the mean-docker
directory.
- docker-compose up --build
Go to localhost:4200
in the browser again, and the app should now work without the CORS error.
Note: Since I added an attached volume for the Angular service in the docker-compose
file, we no longer need to rebuild that service every time we make a change.
I bet you’ve learned a thing or two about MEAN or docker
and docker-compose
.
The problem with our setup, however, is that any time we make changes to either the angular
app or the express
API, we need to run docker-compose up --build
.
This can get tedious or even boring over time. We’ll look at this in another article.
We spend a lot of time writing code. In the early phases of a project, the directory structure doesn’t matter too much and many people tend to ignore best practices. In the short term, this allows the developer to code rapidly, but in the long term will affect code maintainability. AngularJS is still relatively new and developers are still figuring out what works and doesn’t. There are many great ways to structure an app and we’ll borrow some principles from existing mature frameworks but also do some things that are specific to Angular.
In this article, I will cover best practices regarding directory structures for both small and large AngularJS apps. This may be a hot button issue with some developers and while there is no “perfect” way to structure an app, I will be writing from experience and lessons learned from projects I’ve worked on.
First of all, let’s go over what not to do. Many AngularJS tutorials show an app structure that resembles the code below:
app/
----- controllers/
---------- mainController.js
---------- otherController.js
----- directives/
---------- mainDirective.js
---------- otherDirective.js
----- services/
---------- userService.js
---------- itemService.js
----- js/
---------- bootstrap.js
---------- jquery.js
----- app.js
views/
----- mainView.html
----- otherView.html
----- index.html
This is a very typical app structure that I see. On the surface, it seems to make a lot of sense and is very similar to a lot of MVC frameworks. We have a separation of concerns, controllers have their own folder, views have their own folder, external libraries have their own folder, etc.
The main problem with this directory structure is not apparent when you are working with only a handful of views and controllers. In fact, it is preferable to follow this approach when writing a tutorial for example, or for smaller applications. This structure makes it very easy for the reader to visualize and conceptualize the concepts you are covering.
This approach falls apart, however, when you start adding additional functionality to the app. Once you have more than 10 controllers, views, and directives, you are going to have to do a lot of scrolling in your directory tree to find the required files.
For example, say you are building a blog with Angular. You decide that you would like to add the author information to the bottom of each article. Well now, you have to find the blog directive, controller, potentially the service, and finally the view before you can even look at the whole picture and start making edits.
Say a few months down the line, you are adding additional features to your blog and want to rename a particular feature, again it’s a hunt throughout the directory structure to find the affected files, edit them, make sure they are all in sync, and then make the changes.
Let’s get to best practices and what you should be doing to build scalable and maintainable AngularJS apps that your coworkers will love you for. An ideal AngularJS app structure should be modularized into very specific functions. We also want to take advantage of the wonderful AngularJS directives to further compartmentalize our apps. Take a look at a sample directory structure below:
app/
----- shared/ // acts as reusable components or partials of our site
---------- sidebar/
--------------- sidebarDirective.js
--------------- sidebarView.html
---------- article/
--------------- articleDirective.js
--------------- articleView.html
----- components/ // each component is treated as a mini Angular app
---------- home/
--------------- homeController.js
--------------- homeService.js
--------------- homeView.html
---------- blog/
--------------- blogController.js
--------------- blogService.js
--------------- blogView.html
----- app.module.js
----- app.routes.js
assets/
----- img/ // Images and icons for your app
----- css/ // All styles and style related files (SCSS or LESS files)
----- js/ // JavaScript files written for your app that are not for angular
----- libs/ // Third-party libraries such as jQuery, Moment, Underscore, etc.
index.html
This directory structure is much harder to read and understand from the get-go. A newcomer to Angular may be completely turned off by this complex approach, and that is why you see tutorials and examples in Angular following the simpler directory structure found in examples earlier. Let’s dive into the directory structure above and see what’s going on here.
The index.html
file lives at the root of the front-end structure and will primarily handle loading in all the libraries and Angular elements.
The assets folder is also pretty standard. It will contain all the assets needed for your app that are not related to your AngularJS code. There are many great ways to organize this directory but they are out of scope for this article. The example above is good enough for most apps.
This is where the meat of your AngularJS app will live. We have two subfolders in here and a couple of JavaScript files at the root of the folder. The app.module.js
file will handle the setup of your app, load in AngularJS dependencies, and so on. The app.routes.js
file will handle all the routes and the route configuration. After that, we have two subfolders - components and shared. Let’s dive into those next.
The components
folder will contain the actual sections for your Angular app. These will be the static views, directives, and services for that specific section of the site (think an admin users section, gallery creation section, etc). Each page should have its own subfolder with its own controller, services, and HTML files.
Each component here will resemble a mini-MVC application by having a view, controller, and potentially services file(s). If the component has multiple related views, it may be a good idea to further separate these files into ‘views’, ‘controllers’, ‘services’ subfolders.
This can be seen as the simpler folder structure shown earlier in this article, just broken down into sections. So you could essentially think of this as multiple mini Angular applications inside of your giant Angular application.
The shared
folder will contain the individual features that your app will have. These features will ideally be directives that you will want to reuse on multiple pages.
Features such as article posts, user comments, sliders, and others should be crafted as AngularJS Directives. Each component here should have its own subfolder that contains the directive JavaScript file and the template HTML file.
In some instances, a directive may have its own services JavaScript file, and in the case that it does it should also go into this subfolder.
This allows us to have definitive components for our site so that a slider will be a slider across the site. You would probably want to build it so that you could pass in options to extend it. For example, you could have:
<!-- use a slider directive to loop over something -->
<slider id="article-slider" ng-repeat="picture in pictures" size="large" type="square">
</slider>
Now, this slider is accessible from any part of our site, so we’re not reinventing the wheel. We also only have to change it in one place, the shared
folder, and it will update sitewide.
If you are developing a really large application in AngularJS, you will want to go even further and modularize your app. Here are some additional tips on how to accomplish this.
A good practice here would be to create a Core
subfolder under components and then a subfolder for the Header and Footer and any additional components that will be shared across many pages.
In the structure above we didn’t do this, but another good practice for very large apps is to separate the routes into separate files. For example, you might add a blogRoutes.js
file in the components/blog/
subfolder and there include only the routes relevant to the blog, such as /blog/:slug
, /blog/:slug/edit
, /blog/tags/:tag
, etc.
If you do decide to opt-in and build your AngularJS apps in a modularized fashion, be sure to concatenate and minify your code before going into production. There are many great extensions for both Grunt and Gulp that will help with this - so don’t be afraid to split code up as much as you need.
You may not necessarily want one giant .js
file for your entire app, but concatenating your app into a few logical files like:
- app.js
(for app initialization, config, and routing)
- services.js
(for all the services)
This will be greatly beneficial for reducing initial load times of your app.
If you need some more tips on minifying, check out our guide: Declaring AngularJS Modules For Minification
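To make the concatenation idea concrete, here is a deliberately minimal shell sketch. Real builds would use Grunt or Gulp concat/uglify tasks, and the file names here are only illustrative stand-ins:

```shell
# Create a couple of stand-in source files (illustrative names only)
mkdir -p app dist
printf '// app.module.js\n' > app/app.module.js
printf '// app.routes.js\n' > app/app.routes.js

# Concatenate them into one logical bundle for the app shell
cat app/app.module.js app/app.routes.js > dist/app.js

# Number of lines in the bundle
wc -l < dist/app.js
```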
This is more of a general tip, but this will save you a headache in the future, when writing components and you need multiple files for the component, try to name them in a consistent pattern. For example, blogView.html
, blogServices.js
, blogController.js
.
The example above shows a modularized approach to building AngularJS. The benefits of this approach include:
Following the approach above will logically compartmentalize your apps and you will easily be able to locate and edit code.
Your code will be much easier to scale. Adding new directives and pages will not add bloat to existing folders. Onboarding new developers should also be much easier once the structure is explained. Additionally, with this approach, you will be able to drop features in and out of your app with relative ease so testing new functionality or removing it should be a breeze.
Debugging your code will be much easier with this modularized approach to app development. It will be easier to find the offending pieces of code and fix them.
Writing test scripts and testing modularized apps is a whole lot easier than non-modularized ones.
To conclude, this article covered some of the best practices in regards to structuring an AngularJS app. It is easy to ignore good practices in order to save time up front. We all have a tendency to just want to start writing code. Sometimes this passion can hurt us in the long run when our awesome apps grow and become popular and then we’re stuck rewriting or even worse maintaining badly thought out code. I hope this article had some helpful tips.
I plan on building a barebones AngularJS application structure that should follow the best practices outlined in this article that will help you get started building Angular apps quickly and efficiently. Keep a lookout for that in the coming weeks. Stay tuned for Part 2 where we put these concepts into practice!
In the meantime, be sure to check out John Papa’s AngularJS Style Guide for additional tips on AngularJS best practices, and while you’re at it give Todd Motto’s AngularJS Guide a look too.
An upcoming Vue update was at one point set to have classes implemented. In React and Angular, we can create components using JavaScript classes. Some people prefer this way of component creation as it can lead to better readability. It can be a confusing tool, though, since people start to think of JavaScript classes as classes in other languages that have inheritance. JavaScript classes are just syntactic sugar over JavaScript functions, however, and that can lead to a bit of confusion.
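To see that "syntactic sugar" claim in action, here is a small standalone snippet (nothing Vue-specific) showing that a class and a constructor function behave the same way:

```javascript
// A class...
class Counter {
  constructor() { this.count = 0 }
  increment() { this.count++ }
}

// ...and its rough pre-class equivalent
function CounterFn() { this.count = 0 }
CounterFn.prototype.increment = function () { this.count++ }

// A class is still just a function under the hood
console.log(typeof Counter) // "function"

const a = new Counter()
const b = new CounterFn()
a.increment()
b.increment()
console.log(a.count, b.count) // 1 1
```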
In Vue, we create components using objects like so:
// standalone
new Vue({ })
// using the CLI
<script>
export default { }
</script>
There was a proposal started on February 26, 2019 on GitHub that would allow us to create components with classes in addition to objects. This was targeted for the Vue 3.0 release.
Here were the initially proposed classes:
class App extends Vue {
  // options declared via static properties (stage 3)
  // more details below
  static template = `<div @click="increment">
    {{ count }} {{ plusOne }}
  </div>`

  // reactive data declared via class fields (stage 3)
  // more details below
  count = 0

  // lifecycle
  created() {
    console.log(this.count)
  }

  // getters are converted to computed properties
  get plusOne() {
    return this.count + 1
  }

  // a method
  increment() {
    this.count++
  }
}
<template>
  <div @click="increment">
    {{ count }} {{ plusOne }}
    <Foo />
  </div>
</template>

<script>
import Vue from 'vue'
import Foo from './Foo.vue'

export default class App extends Vue {
  static components = {
    Foo
  }

  count = 0

  created() {
    console.log(this.count)
  }

  get plusOne() {
    return this.count + 1
  }

  increment() {
    this.count++
  }
}
</script>
Pulled directly from the RFC on GitHub:
Vue’s current object-based component API has created some challenges when it comes to type inference. As a result, most users opting into using Vue with TypeScript end up using vue-class-component. This approach works, but with some drawbacks:
- vue-class-component
had to implement some inefficient workarounds in order to provide the desired API without altering Vue internals.
- vue-class-component
has to maintain typing compatibility with Vue core, and the maintenance overhead can be eliminated by exposing the class directly from Vue core.
The primary motivation of native class support is to provide a built-in and more efficient replacement for vue-class-component
. The affected target audience are most likely also TypeScript users.
The API is also designed to not rely on anything TypeScript specific: it should work equally well in plain ES, for users who prefer using native ES classes.
Note: We are not pushing this as a replacement for the existing object-based API - the object-based API will continue to work in 3.0.
There are two major reasons why the Class API proposal was dropped:
Composition functions, classes, and objects would allow us to make the same component in 3 different ways. Vue has always focused on developer experience, so it’s comforting to see them try to simplify that experience again. They feel that 3 ways to do the same thing is not the best.
With the coming composition functions, TypeScript support is one of the main benefits. Support is better in this approach than in the classes approach.
With the two new APIs, #22 Advanced Reactivity API and #23 Dynamic Lifecycle Injection, we have a new way of declaring component logic: using function calls. These are inspired by React Hooks.
In composition functions, a component’s logic will happen in a new setup()
method. It is pretty much data()
but gives us more flexibility using function calls inside of it.
// everything tree-shakable
import {
  value,
  computed,
  watch,
  onMounted,
  inject
} from 'vue'

const App = {
  // same as before
  props: {
    a: String,
    b: Number
  },

  // same as before
  components: {
    // ...
  },

  setup(props) {
    // data
    const count = value(1)

    // computed
    const plusOne = computed(() => count.value + 1)

    // methods
    function inc() {
      count.value++
    }

    // watch
    watch(() => props.b + count.value, val => {
      console.log('changed: ', val)
    })

    // lifecycle
    onMounted(() => {
      console.log('mounted!')
    })

    // dependency injection
    const injected = inject(SomeSymbol)

    // other options like el, extends and mixins are no longer necessary

    // expose bindings on render context
    // any value containers will be unwrapped when exposed
    // any non-containers will be exposed as-is, including functions
    return {
      count,
      plusOne,
      inc,
      injected
    }
  },

  // template: `same as before`,
  render({ state, props, slots }) {
    // `this` points to the render context and works same as before (exposes everything)
    // `state` exposes bindings returned from `setup()` (with value wrappers unwrapped)
  }
}
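To build intuition for what value()
and computed()
return in the example above, here is a toy, standalone sketch of the "value container" idea. This is emphatically not Vue’s implementation (there is no dependency tracking or caching), just an illustration of why reads go through .value
and how a getter can stand in for a computed property:

```javascript
// Toy value container: an object wrapping a mutable .value
function value(initial) {
  return { value: initial }
}

// Toy computed: recomputes on every read via a getter
// (real Vue caches the result and tracks dependencies)
function computed(getter) {
  return {
    get value() {
      return getter()
    }
  }
}

const count = value(1)
const plusOne = computed(() => count.value + 1)

console.log(plusOne.value) // 2
count.value++
console.log(plusOne.value) // 3
```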
I’m excited to see where the Vue team goes with these “composition functions”. I like the idea of thinking of our components more as composed parts since that’s more in line with a JavaScript way of thinking. Classes lead to people thinking in a more object-oriented way.
This also leans towards a thought process that is similar to how React is moving with React Hooks. The “composition functions” also allow for better TypeScript support which in turn leads to a better developer experience and tooling.
I’m looking forward to seeing where Vue goes next. I don’t mind Vue’s way of declaring components with objects and it’s looking good with the way they are sticking with it.
What are your thoughts on the dropping of the classes proposal?
Allowing users to log in to your app is one of the most common features you’ll add to a web app you build. This article will cover how to add simple authentication to your Flask app. The main package we will use to accomplish this is Flask Login.
We’re going to build some signup and login pages that let users log in and access protected pages that non-logged-in users can’t see. We’ll grab information from the user model and display it on our protected pages when the user logs in to simulate what a profile would look like.
We will cover the following in this article:
Our app will use the Flask app factory pattern with blueprints. We’ll have one blueprint that handles everything auth-related, and we’ll have another blueprint for our regular routes, which include the index and the protected profile page. In a real app, of course, you can break down the functionality in any way you like, but what I’ve proposed will work well for this tutorial.
To start, we need to create the directories and files for our project.
- project
---- templates
-------- base.html <!-- contains common layout and links -->
-------- index.html <!-- show the home page -->
-------- login.html <!-- show the login form -->
-------- profile.html <!-- show the profile page -->
-------- signup.html <!-- show the signup form -->
---- __init__.py <!-- setup our app -->
---- auth.py <!-- the auth routes for our app -->
---- main.py <!-- the non-auth routes for our app -->
---- models.py <!-- our user model -->
You can create those files now, and we’ll fill them in as we progress along.
There are three main packages we need for our project:
- Flask
- Flask-SQLAlchemy, to handle the database interactions
- Flask-Login, to handle the user sessions
We’ll only be using SQLite for the database to avoid having to install any extra dependencies for the database. Here’s what you need to run after creating your virtual environment to install the packages.
- pip install flask flask-sqlalchemy flask-login
Let’s start by creating the __init__.py
file for our project. This will have the function to create our app which will initialize the database and register our blueprints. At the moment this won’t do much, but it will be needed for the rest of our app. All we need to do is initialize SQLAlchemy, set some configuration values, and register our blueprints here.
__init__.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
# init SQLAlchemy so we can use it later in our models
db = SQLAlchemy()
def create_app():
    app = Flask(__name__)

    app.config['SECRET_KEY'] = '9OLWxND4o83j4K4iuopO'
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///db.sqlite'

    db.init_app(app)

    # blueprint for auth routes in our app
    from .auth import auth as auth_blueprint
    app.register_blueprint(auth_blueprint)

    # blueprint for non-auth parts of app
    from .main import main as main_blueprint
    app.register_blueprint(main_blueprint)

    return app
Now that we have the main app file, we can start adding in our routes.
For our routes, we’ll use two blueprints. For our main blueprint, we’ll have a home page (/
) and profile page (/profile
) after we log in. If the user tries to access the profile page without being logged in, they’ll be sent to our login route.
For our auth blueprint, we’ll have routes to retrieve both the login page (/login
) and the signup page (/signup
). We’ll also have routes for handling the POST request from both of those two routes. Finally, we’ll have a logout route (/logout
) to log out an active user.
Let’s go ahead and add them even though they won’t do much. Later we will update them so we can use them.
main.py
from flask import Blueprint
from . import db

main = Blueprint('main', __name__)

@main.route('/')
def index():
    return 'Index'

@main.route('/profile')
def profile():
    return 'Profile'
auth.py
from flask import Blueprint
from . import db

auth = Blueprint('auth', __name__)

@auth.route('/login')
def login():
    return 'Login'

@auth.route('/signup')
def signup():
    return 'Signup'

@auth.route('/logout')
def logout():
    return 'Logout'
You can now set the FLASK_APP
and FLASK_DEBUG
values and run the project. You should be able to navigate to the five possible URLs and see the text returned.
- export FLASK_APP=project
- export FLASK_DEBUG=1
- flask run
Let’s go ahead and create the templates that are used in our app. This is the first step before we can implement the actual login functionality. Our app will use four templates:
index.html
profile.html
login.html
signup.html
We’ll also have a base template that will have code common to each of the pages. In this case, the base template will have navigation links and the general layout of the page. Let’s create them now.
templates/base.html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Flask Auth Example</title>
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.2/css/bulma.min.css" />
</head>
<body>
  <section class="hero is-primary is-fullheight">
    <div class="hero-head">
      <nav class="navbar">
        <div class="container">
          <div id="navbarMenuHeroA" class="navbar-menu">
            <div class="navbar-end">
              <a href="{{ url_for('main.index') }}" class="navbar-item">
                Home
              </a>
              <a href="{{ url_for('main.profile') }}" class="navbar-item">
                Profile
              </a>
              <a href="{{ url_for('auth.login') }}" class="navbar-item">
                Login
              </a>
              <a href="{{ url_for('auth.signup') }}" class="navbar-item">
                Sign Up
              </a>
              <a href="{{ url_for('auth.logout') }}" class="navbar-item">
                Logout
              </a>
            </div>
          </div>
        </div>
      </nav>
    </div>
    <div class="hero-body">
      <div class="container has-text-centered">
        {% block content %}
        {% endblock %}
      </div>
    </div>
  </section>
</body>
</html>
templates/index.html
{% extends "base.html" %}

{% block content %}
<h1 class="title">
  Flask Login Example
</h1>
<h2 class="subtitle">
  Easy authentication and authorization in Flask.
</h2>
{% endblock %}
templates/login.html
{% extends "base.html" %}

{% block content %}
<div class="column is-4 is-offset-4">
  <h3 class="title">Login</h3>
  <div class="box">
    <form method="POST" action="/login">
      <div class="field">
        <div class="control">
          <input class="input is-large" type="email" name="email" placeholder="Your Email" autofocus="">
        </div>
      </div>
      <div class="field">
        <div class="control">
          <input class="input is-large" type="password" name="password" placeholder="Your Password">
        </div>
      </div>
      <div class="field">
        <label class="checkbox">
          <input type="checkbox">
          Remember me
        </label>
      </div>
      <button class="button is-block is-info is-large is-fullwidth">Login</button>
    </form>
  </div>
</div>
{% endblock %}
templates/signup.html
{% extends "base.html" %}

{% block content %}
<div class="column is-4 is-offset-4">
  <h3 class="title">Sign Up</h3>
  <div class="box">
    <form method="POST" action="/signup">
      <div class="field">
        <div class="control">
          <input class="input is-large" type="email" name="email" placeholder="Email" autofocus="">
        </div>
      </div>
      <div class="field">
        <div class="control">
          <input class="input is-large" type="text" name="name" placeholder="Name" autofocus="">
        </div>
      </div>
      <div class="field">
        <div class="control">
          <input class="input is-large" type="password" name="password" placeholder="Password">
        </div>
      </div>
      <button class="button is-block is-info is-large is-fullwidth">Sign Up</button>
    </form>
  </div>
</div>
{% endblock %}
templates/profile.html
{% extends "base.html" %}
{% block content %}
<h1 class="title">
Welcome, Anthony!
</h1>
{% endblock %}
Once you’ve added the templates, we can update the return statements in each of the routes we have to return the templates instead of the text.
main.py
from flask import Blueprint, render_template
...
@main.route('/')
def index():
return render_template('index.html')
@main.route('/profile')
def profile():
return render_template('profile.html')
auth.py
from flask import Blueprint, render_template
...
@auth.route('/login')
def login():
return render_template('login.html')
@auth.route('/signup')
def signup():
return render_template('signup.html')
For example, here is what the signup page looks like if you navigate to /signup. You should be able to see the pages for /, /login, and /profile as well. We’ll leave /logout alone for now because it won’t display a template when it’s done.
Our user model represents what it means for our app to have a user. To keep it simple, we’ll have fields for an email address, password, and name. Of course in your application, you may decide you want much more information to be stored per user. You can add things like birthday, profile picture, location, or any user preferences.
Models created in Flask-SQLAlchemy are represented by classes which then translate to tables in a database. The attributes of those classes then turn into columns for those tables.
Let’s go ahead and create that user model.
models.py
from . import db
class User(db.Model):
id = db.Column(db.Integer, primary_key=True) # primary keys are required by SQLAlchemy
email = db.Column(db.String(100), unique=True)
password = db.Column(db.String(100))
name = db.Column(db.String(1000))
Like I said before, we’ll be using a SQLite database. We could create a SQLite database on our own, but let’s have Flask-SQLAlchemy do it for us.
We already have the path of the database specified in the __init__.py
file, so we just need to tell Flask-SQLAlchemy to create the database for us in the Python REPL.
If you stop your app and open up a Python REPL, we can create the database using the create_all
method on the db object.
from project import db, create_app
db.create_all(app=create_app()) # pass the create_app result so Flask-SQLAlchemy gets the configuration.
You should now see a db.sqlite file in your project directory. This database will have our user table in it.
Now that we have everything set up, we can finally get to writing the code for the authorization.
For our sign-up function, we’re going to take the data the user types into the form and add it to our database. But before we add it, we need to make sure the user doesn’t already exist in the database. If it doesn’t, then we need to make sure we hash the password before placing it into the database, because we don’t want our passwords stored in plaintext.
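Werkzeug’s generate_password_hash and check_password_hash will do this for us below, but the underlying idea is worth seeing once. Here is a minimal, illustrative sketch of salted password hashing using only Python’s standard library (the names hash_password and verify_password are ours, not part of Flask or Werkzeug):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Return 'salt$digest' for the given password, using PBKDF2-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt per password
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    return salt.hex() + '$' + digest.hex()

def verify_password(password, stored, iterations=100_000):
    """Re-hash the candidate with the stored salt and compare in constant time."""
    salt_hex, digest_hex = stored.split('$')
    candidate = hashlib.pbkdf2_hmac(
        'sha256', password.encode(), bytes.fromhex(salt_hex), iterations)
    return hmac.compare_digest(candidate.hex(), digest_hex)

stored = hash_password('correct horse')
assert verify_password('correct horse', stored)
assert not verify_password('wrong guess', stored)
```

Werkzeug’s helpers follow the same pattern: the salt is stored alongside the digest, so the plaintext password never needs to be saved.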
Let’s start by adding a second function to handle the POSTed form data; in it, we will gather the data the user submitted. We’ll create the function and add a redirect at the bottom, because once we add the user to the database, we will redirect to the login route.
auth.py
from flask import Blueprint, render_template, redirect, url_for
...
@auth.route('/signup', methods=['POST'])
def signup_post():
# code to validate and add user to database goes here
return redirect(url_for('auth.login'))
Now, let’s add the rest of the code necessary for signing up a user.
To start, we’ll have to use the request object to get the form data. If you’re not familiar with the request object, I wrote an article on it here: How To Process Incoming Request Data in Flask
auth.py
from flask import Blueprint, render_template, redirect, url_for, request
from werkzeug.security import generate_password_hash, check_password_hash
from .models import User
from . import db
...
@auth.route('/signup', methods=['POST'])
def signup_post():
email = request.form.get('email')
name = request.form.get('name')
password = request.form.get('password')
user = User.query.filter_by(email=email).first() # if this returns a user, then the email already exists in database
if user: # if a user is found, we want to redirect back to signup page so user can try again
return redirect(url_for('auth.signup'))
# create new user with the form data. Hash the password so plaintext version isn't saved.
new_user = User(email=email, name=name, password=generate_password_hash(password, method='sha256'))
# add the new user to the database
db.session.add(new_user)
db.session.commit()
return redirect(url_for('auth.login'))
Now that we have the signup method done, we should be able to create a new user. Use the form to create a user.
There are two ways to verify that the sign-up worked: use a database viewer to look at the row that was added to your table, or simply try signing up with the same email address again; if you get an error, you know the first email was saved properly. Let’s take that second approach.
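If you do want to take the database-viewer route, Python’s built-in sqlite3 module is enough. The sketch below runs against an in-memory stand-in for our user table; to inspect the real database, point connect() at your db.sqlite file instead:

```python
import sqlite3

# ':memory:' stands in for the real file; use sqlite3.connect('db.sqlite')
# in your project directory to inspect the actual table
conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE user ('
    'id INTEGER PRIMARY KEY, email TEXT UNIQUE, password TEXT, name TEXT)')
conn.execute(
    'INSERT INTO user (email, password, name) VALUES (?, ?, ?)',
    ('anthony@example.com', 'sha256$...hashed...', 'Anthony'))
conn.commit()

# the same SELECT works against the table Flask-SQLAlchemy created
rows = conn.execute('SELECT id, email, name FROM user').fetchall()
print(rows)  # [(1, 'anthony@example.com', 'Anthony')]
```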
We can add code to let the user know the email already exists and tell them to go to the login page. By calling the flash function, we will send a message to the next request, which in this case, is the redirect. The page we land on will then have access to that message in the template.
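Conceptually, flash just stashes a message in the session so the next request can pop it out. A framework-free sketch of that idea (the _session dict here stands in for Flask’s real session object):

```python
# minimal sketch of the flash mechanism: messages are stashed, then
# consumed on the next request; names are ours, not Flask's internals
_session = {}

def flash(message):
    """Queue a message for the next request."""
    _session.setdefault('_flashes', []).append(message)

def get_flashed_messages():
    """Return queued messages and clear them, like Flask's template helper."""
    return _session.pop('_flashes', [])

flash('Email address already exists')
assert get_flashed_messages() == ['Email address already exists']
assert get_flashed_messages() == []  # consumed: gone once read
```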
First, we add the flash before we redirect back to our signup page.
auth.py
from flask import Blueprint, render_template, redirect, url_for, request, flash
...
@auth.route('/signup', methods=['POST'])
def signup_post():
...
if user: # if a user is found, we want to redirect back to signup page so user can try again
flash('Email address already exists')
return redirect(url_for('auth.signup'))
To get the flashed message in the template, we can add this code above the form. This will display the message directly above the form.
templates/signup.html
...
{% with messages = get_flashed_messages() %}
{% if messages %}
<div class="notification is-danger">
{{ messages[0] }}. Go to <a href="{{ url_for('auth.login') }}">login page</a>.
</div>
{% endif %}
{% endwith %}
<form method="POST" action="/signup">
...
The login method is similar to the signup function in that we take the user’s input and do something with it. In this case, we check whether the email address entered is in the database. If it is, we verify the password by hashing the value the user passed in and comparing it to the hashed password in the database; when the two match, we know the user entered the correct password.
Once the user has passed the password check, we know that they have the correct credentials and we can go ahead and log them in using Flask-Login. By calling login_user
, Flask-Login will create a session for that user that will persist as the user stays logged in, which will allow the user to view protected pages.
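Flask-Login doesn’t store the whole user in the cookie; it stores the user’s ID in a session cookie that is cryptographically signed with the app’s secret key, so it can’t be tampered with. Here is a heavily simplified sketch of that signing idea — not Flask’s actual implementation, and the function names are ours:

```python
import hashlib
import hmac

SECRET_KEY = b'change-me'  # Flask uses app.config['SECRET_KEY'] for this

def sign_user_id(user_id: int) -> str:
    """Produce 'user_id.signature': tamper-evident, though not encrypted."""
    payload = str(user_id).encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def load_signed_user_id(token: str):
    """Return the user id if the signature checks out, else None."""
    payload, _, sig = token.rpartition('.')
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return int(payload) if hmac.compare_digest(sig, expected) else None

token = sign_user_id(42)
assert load_signed_user_id(token) == 42
# changing the id without the secret key invalidates the signature
assert load_signed_user_id(token.replace('42.', '7.')) is None
```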
We can start with a new route for handling the POSTed data. We’ll redirect to the profile page when the user successfully logs in.
auth.py
...
@auth.route('/login', methods=['POST'])
def login_post():
# login code goes here
return redirect(url_for('main.profile'))
Now, we need to verify if the user has the correct credentials.
auth.py
...
@auth.route('/login', methods=['POST'])
def login_post():
email = request.form.get('email')
password = request.form.get('password')
remember = True if request.form.get('remember') else False
user = User.query.filter_by(email=email).first()
# check if user actually exists
# take the user supplied password, hash it, and compare it to the hashed password in database
if not user or not check_password_hash(user.password, password):
flash('Please check your login details and try again.')
return redirect(url_for('auth.login')) # if user doesn't exist or password is wrong, reload the page
# if the above check passes, then we know the user has the right credentials
return redirect(url_for('main.profile'))
Let’s add in the block in the template so the user can see the flashed message. Like the signup form, let’s add the potential error message directly above the form.
templates/login.html
...
{% with messages = get_flashed_messages() %}
{% if messages %}
<div class="notification is-danger">
{{ messages[0] }}
</div>
{% endif %}
{% endwith %}
<form method="POST" action="/login">
So we have the ability to say a user has been logged in successfully, but there is nothing to actually log the user in anywhere. This is where we bring in Flask-Login.
But first, we need a few things for Flask-Login to work.
We start by adding something called the UserMixin to our User model. The UserMixin will add Flask-Login attributes to our model so Flask-Login will be able to work with it.
models.py
from flask_login import UserMixin
from . import db
class User(UserMixin, db.Model):
id = db.Column(db.Integer, primary_key=True) # primary keys are required by SQLAlchemy
email = db.Column(db.String(100), unique=True)
password = db.Column(db.String(100))
name = db.Column(db.String(1000))
Then, we need to specify our user loader. A user loader tells Flask-Login how to find a specific user from the ID that is stored in their session cookie. We can add this in our create_app
function along with basic init code for Flask-Login.
__init__.py
...
from flask_login import LoginManager
def create_app():
...
db.init_app(app)
login_manager = LoginManager()
login_manager.login_view = 'auth.login'
login_manager.init_app(app)
from .models import User
@login_manager.user_loader
def load_user(user_id):
# since the user_id is just the primary key of our user table, use it in the query for the user
return User.query.get(int(user_id))
Finally, we can add the login_user
function just before we redirect to the profile page to create the session.
auth.py
from flask_login import login_user
from .models import User
...
@auth.route('/login', methods=['POST'])
def login_post():
# if the above check passes, then we know the user has the right credentials
login_user(user, remember=remember)
return redirect(url_for('main.profile'))
With Flask-Login set up, we can finally use the /login
route.
When everything is successful, we will see the profile page.
If your name isn’t also Anthony, then you’ll see that your name is wrong. What we want is the profile to display the name in the database. So first, we need to protect the page and then access the user’s data to get the name.
Protecting a page with Flask-Login is very simple: we add the @login_required
decorator between the route and the function. This will prevent a user who isn’t logged in from seeing the route. If the user isn’t logged in, they will get redirected to the login page, per the Flask-Login configuration.
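Conceptually, login_required is just a decorator that checks for an authenticated user before running the view and redirects otherwise. A framework-free sketch of that pattern (current_user and redirect_to_login here are simple stand-ins, not Flask-Login’s real objects):

```python
from functools import wraps

# stand-ins for Flask-Login's machinery, for illustration only
current_user = {'is_authenticated': False}

def redirect_to_login():
    return 'redirect: /login'

def login_required(view):
    @wraps(view)
    def wrapped(*args, **kwargs):
        if not current_user['is_authenticated']:
            # Flask-Login redirects to login_manager.login_view here
            return redirect_to_login()
        return view(*args, **kwargs)
    return wrapped

@login_required
def profile():
    return 'profile page'

assert profile() == 'redirect: /login'   # not logged in yet
current_user['is_authenticated'] = True
assert profile() == 'profile page'       # logged in
```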
With routes that are decorated with the login_required
decorator, we then have the ability to use the current_user
object inside of the function. This current_user
represents the user from the database, and we can access all of the attributes of that user with dot notation. For example, current_user.email
, current_user.password
, current_user.name
, and current_user.id
will return the actual values stored in the database for the logged-in user.
Let’s use the name of the current user and send it to the template. We then will use that name and display its value.
main.py
from flask import Blueprint, render_template
from flask_login import login_required, current_user
...
@main.route('/profile')
@login_required
def profile():
return render_template('profile.html', name=current_user.name)
templates/profile.html
...
<h1 class="title">
Welcome, {{ name }}!
</h1>
Once we go to our profile page, we then see that the user’s name appears.
The final thing we can do is update our logout view. We can call the logout_user
function in a route for logging out. We have the login_required
decorator because it doesn’t make sense to log out a user who isn’t logged in to begin with.
from flask_login import login_user, logout_user, login_required
...
@auth.route('/logout')
@login_required
def logout():
logout_user()
return redirect(url_for('main.index'))
After we log out and try viewing the profile page again, we see an error message appear. This is because Flask-Login flashes a message for us when the user isn’t allowed to access a page.
One last thing we can do is put if statements in the templates to display only the links relevant to the user. Before the user logs in, they will have the option to log in or sign up; after they have logged in, they can go to their profile or log out.
templates/base.html
...
<div class="navbar-end">
<a href="{{ url_for('main.index') }}" class="navbar-item">
Home
</a>
{% if current_user.is_authenticated %}
<a href="{{ url_for('main.profile') }}" class="navbar-item">
Profile
</a>
{% endif %}
{% if not current_user.is_authenticated %}
<a href="{{ url_for('auth.login') }}" class="navbar-item">
Login
</a>
<a href="{{ url_for('auth.signup') }}" class="navbar-item">
Sign Up
</a>
{% endif %}
{% if current_user.is_authenticated %}
<a href="{{ url_for('auth.logout') }}" class="navbar-item">
Logout
</a>
{% endif %}
</div>
We’ve done it! We have used Flask-Login and Flask-SQLAlchemy to build a very basic login system for our app. We covered how to authenticate a user by first creating a user model and storing the user information for later. Then we had to verify the user’s password was correct by hashing the password from the form and comparing it to the one stored in the database. Finally, we added authorization to our app by using the @login_required
decorator on a profile page so only logged-in users can see that page.
What we created in this tutorial will be sufficient for smaller apps, but if you wish to have more functionality from the beginning, you may want to consider using either the Flask-User or Flask-Security libraries, which are both built on top of the Flask-Login library.
In this blog post, we’ll be learning about NgClass
and NgStyle
in Angular v2.x. Throughout this blog post, Angular means Angular version greater than 2.x unless stated otherwise.
For AngularJS v1.x styling, see our other article: The Many Ways To Use ngClass.
Creating dynamic styles in web applications can be a real pain. Luckily with Angular, we have multiple ways to create dynamic stylings to our application.
But first, let us take a quick look at what we aim to achieve with ngStyle
and ngClass
.
First of all, let us look at how we change the class and style of an element in pure JavaScript.
And for that, we would need to create a small div
like the one below:
<div id="my_id">This is a div written in black.</div>
To change the style of the above div
and class
in pure JavaScript, we would need to do this:
var divToChange = document.getElementById('my_id');
// to change the class, we would do:
divToChange.className = "newclass";
// if we want to add multiple classes, we could just do:
divToChange.className = "newclass secondclass thirdclass";
// if we want to add a class name without removing the classes already present, we do:
divToChange.className = divToChange.className.concat(" addedwit");
// to change the background color of such an element, we would do:
divToChange.style.backgroundColor = "red";
// to change the text color of such an element, we would need:
divToChange.style.color = "white";
// which we would agree is a bit more stressful than what Angular ships with
If we look at the above code, we notice that we had to first get the element by id, then access its className before changing values, append classes with concat, and so on.
We would also notice that in some cases we need to reach a property of a property (e.g., divToChange.style.backgroundColor) before assigning a value to it.
We would both agree that this can be a very tedious method of dealing with just styles and class names.
Angular makes many parts of development easier including styling. Now, let’s see how these are taken care of in Angular.
First of all, this tutorial believes that you know:
In case you do not have the Angular CLI installed, you can run the following command.
- sudo npm install -g angular-cli
Once the Angular CLI has been installed, let us create a new project. So we run:
- ng new angular-class-style
The above command creates a new project called angular-class-style.
Once done, we change directory into our Angular project and run ng serve.
- # change directory into our app directory
- cd angular-class-style
-
- # serve the application
- ng serve
We should see this:
Now let’s get started with Style:
In Angular, there are two methods of passing styles into elements.
[style.property]
BindingIn the first instance, we can bind something like [style.color]='red'
.
This kind of styling would make the color of the element red. Similarly, we can alter any style property that way by passing a string as the value. However, to be more dynamic, we can pass a dynamic style using a variable that exists in the component.
Let’s take this for example. Open up your src/app/app.component.ts
file, we would replace it with the following content.
//import the angular component from angular core
import { Component } from '@angular/core';
@Component({
// define the selector for your app
selector: 'app-root',
//pass in the template url
templateUrl: './app.component.html',
//pass in the css of the component
styleUrls: ['./app.component.css']
})
export class AppComponent {
title = 'app works!';
//set a property that holds a random color for our style.
randomcolor = this.getRandomColor();
//function to get random colors
public getRandomColor() {
var letters = '0123456789ABCDEF'.split('');
var color = '#';
for (var i = 0; i < 6; i++){
color += letters[Math.floor(Math.random() * 16)];
}
return color;
}
//function to set a new random color
setColor() {
this.randomcolor = this.getRandomColor();
}
}
What we have done is very simple. We have set a property called randomcolor
that holds a color value, and we have immediately set it to the value of our random generator function.
Next, we defined a random generator function that picks random hex digits and returns a color in the #ffffff format.
Then we defined a setColor
function that sets the variable to another randomly generated color.
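The generation logic itself is not Angular-specific: pick six random hex digits and prepend a '#'. Here is the same algorithm sketched in Python, purely to make the steps explicit:

```python
import random

HEX_DIGITS = '0123456789ABCDEF'

def get_random_color() -> str:
    """Build a color like '#A3F01B' by choosing six random hex digits."""
    return '#' + ''.join(random.choice(HEX_DIGITS) for _ in range(6))

color = get_random_color()
assert len(color) == 7 and color.startswith('#')
```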
After doing this, we should update our src/app/app.component.html
to the following structure:
<h1>
{{title}}
</h1>
<!---style binding for colors -->
<h2>Style Binding using 'style.color' directive</h2>
<!--call the random color property to set the color of this div -->
<div [style.color]="randomcolor"> I would be styled with different colors dynamically </div>
<!--attach a click function to this button to set the color dynamically -->
<button (click)="setColor()"> Set my color </button>
Here, we have defined our HTML structure for the component.
We have four main elements in this piece of code: the title, a heading describing the example, a div to which we have attached a style binding, and a button that calls the setColor function from our component.
If we save the file and the app recompiles, we should see something like this:
[ngStyle]
BindingAnother method using style is to use the [ngStyle]
property directly, which allows us to pass objects into it. For example [ngStyle]="{'color':'white', 'font-size':'17px'}"
which also allows us to set those styles dynamically.
Let’s take a look at the following example. Update your src/app/app.component.ts
to this:
//import the angular component from angular core
import { Component } from '@angular/core';
@Component({
// define the selector for your app
selector: 'app-root',
//pass in the template url
templateUrl: './app.component.html',
//pass in the css of the component
styleUrls: ['./app.component.css']
})
export class AppComponent {
title = 'app works!';
//set a property that holds a random color for our style.
randomcolor=this.getRandomColor();
//declare the fontsize and background color properties
public font_size="12px";
public background_color="grey ";
//function to get random colors
public getRandomColor(){
var letters = '0123456789ABCDEF'.split('');
var color = '#';
for (var i = 0; i < 6; i++){
color += letters[Math.floor(Math.random() * 16)];
}
return color;
}
//function to set a new random color
setColor(){
this.randomcolor=this.getRandomColor();
}
}
Now let’s look at what has changed:
We added two properties into our app.component.ts
file named font_size
and background_color
. These two properties are responsible for changing the style dynamically.
Currently, they are just properties holding default values, and nothing special happens with them.
Now lets also update our src/app/app.component.html
to this:
<h1>
{{title}}
</h1>
<!---style binding for colors -->
<h2>Style Binding using 'style.color' directive</h2>
<!--call the random color property to set the color of this div -->
<div [style.color]="randomcolor"> I would be styled with different colors dynamically </div>
<!--attach a click function to this button to set the color dynamically -->
<button (click)="setColor()"> Set my color </button>
<!---style bindning for ngStyle -->
<h2>Style Binding using 'ngStyle' directive</h2>
<!--call the style object to style class -->
<div [ngStyle]="{
'color': getRandomColor(),
'font-size': font_size,
'background-color': background_color
}"> I would be styled with different colors dynamically </div>
<!--attach a click function to this button to set the style dynamically -->
<input type="text" [(ngModel)]="background_color" placeholder="background_color">
<input type="text" [(ngModel)]="font_size" placeholder="font_size">
Let us take a brief look at what we added to the HTML structure:
We added some extra HTML elements to the page: a new heading to specify what we are doing, and a new div
with our ngStyle
binding, with an object passed to it.
One thing you might notice is that the object passed to it looks a lot like a CSS rule.
In fact, it is almost one, except that we can pass variables and even function calls as its values.
Notice that our getRandomColor
function is being called here again to set the color. It isn’t compulsory, but rather than hard-coding the color, we decided to give it some spice.
We also have two new input elements, with ngModel
bindings to the variables used in the style, for reactivity.
After adding these files and compiling, your page should look like this:
Now let’s look at what has changed:
Although we set some default parameters, let’s move into our app.component.html
file to see what’s there.
We added a new div bound with ngStyle and passed an object to it. The object consists of three style properties: color
, background-color
, and font-size
.
The color
attribute is set to a random color from our random color function, while both the background-color
and font-size
are preset.
Just after that, we find two inputs with ngModel
bindings to font_size
and background_color
, which makes those fields reactive.
So if I were to type 18px
to the font-size
box, I get text of about 18 pixels. If I am to type in orange
to the background-color
box, I get a background color of orange.
Now let us move into using the class
directives.
[className]
directiveLet us start by using the [className]
directive:
Let us open up a file called src/app/app.component.css
and add some CSS classes into it.
.style1 {
font-family: verdana;
font-size: 20px;
}
.style2 {
color: red;
text-align: center;
}
What we have done here is to declare two classes, namely: style1
and style2
with different CSS properties. However, let’s go on and see what we would use them for later on.
Now let us open up our src/app/app.component.html
and replace it with the following content:
<h1>
{{title}}
</h1>
<!---style binding for colors -->
<h2>Style Binding using 'style.color' directive</h2>
<!--call the random color property to set the color of this div -->
<div [style.color]="randomcolor"> I would be styled with different colors dynamically </div>
<!--attach a click function to this button to set the color dynamically -->
<button (click)="setColor()"> Set my color </button>
<!---style binding for ngStyle -->
<h2>Style Binding using 'ngStyle' directive</h2>
<!--call the style object to style class -->
<div [ngStyle]="{
'color': getRandomColor(),
'font-size': font_size,
'background-color': background_color
}"> I would be styled with different colors dynamically </div>
<!--attach a click function to this button to set the style dynamically -->
<input type="text" [(ngModel)]="background_color" placeholder="background_color">
<input type="text" [(ngModel)]="font_size" placeholder="font_size">
<!---class binding for ClassName -->
<h2>Class Binding using 'className' directive</h2>
<!--call the ngclass object to add a class name to it. -->
<div [className]="'style1'"> I would be classed using classname</div>
If you notice, the font of the last text has changed.
What have we done here?
We have added some classes to our CSS files, and we have used the className
directive to specify the class we want in our HTML.
This is how what we have looks now:
This might not seem useful yet, as it is the same as writing class="style1"
directly. But let us take a look at the next example to see when binding class names can be useful.
Open up your src/app/app.component.ts
file and replace it with the following code:
//import the angular component from angular core
import { Component } from '@angular/core';
@Component({
// define the selector for your app
selector: 'app-root',
//pass in the template url
templateUrl: './app.component.html',
//pass in the css of the component
styleUrls: ['./app.component.css']
})
export class AppComponent {
title = 'app works!';
//set a property that holds a random color for our style.
randomcolor=this.getRandomColor();
//declare the fontsize and background color properties
public font_size="12px";
public background_color="grey ";
//declare a variable to hold class name:
public my_Class = 'style1';
//function to get random colors
public getRandomColor(){
var letters = '0123456789ABCDEF'.split('');
var color = '#';
for (var i = 0; i < 6; i++){
color += letters[Math.floor(Math.random() * 16)];
}
return color;
}
//function to set a new random color
setColor(){
this.randomcolor=this.getRandomColor();
}
//function to change the class from style1 to style 2 when clicked
toggle_class(){
if(this.my_Class=="style1"){
this.my_Class='style2';
}else{
this.my_Class='style1';
}
}
}
If you look at the above code, we have only added one extra property, public my_Class = 'style1'
, which is just a holder for the class we are applying, and one extra method for changing the value of the my_Class
property, which simply toggles between the two classes.
Now let’s add the following HTML code:
<h1>
{{title}}
</h1>
<!---style binding for colors -->
<h2>Style Binding using 'style.color' directive</h2>
<!--call the random color property to set the color of this div -->
<div [style.color]="randomcolor"> I would be styled with different colors dynamically </div>
<!--attach a click function to this button to set the color dynamically -->
<button (click)="setColor()"> Set my color </button>
<!---style binding for ngStyle -->
<h2>Style Binding using 'ngStyle' directive</h2>
<!--call the style object to style class -->
<div [ngStyle]="{
'color': getRandomColor(),
'font-size': font_size,
'background-color': background_color
}"> I would be styled with different colors dynamically </div>
<!--attach a click function to this button to set the style dynamically -->
<input type="text" [(ngModel)]="background_color" placeholder="background_color">
<input type="text" [(ngModel)]="font_size" placeholder="font_size">
<!---class binding for ClassName -->
<h2>Class Binding using 'className' directive</h2>
<!--call the ngclass object to add a class name to it. -->
<div [className]="'style1'"> I would be classed using classname</div>
<!---class binding for ClassName -->
<h2>Class Binding using 'className' directive and variable</h2>
<!--call the ngclass object to add a class name to it. -->
<div [className]="my_Class"> I would be classed using classname</div>
<!--button to change the class -->
<button (click)="toggle_class()">Toggle_class</button>
If we click the button, we notice the classes are being swapped.
So what have we done here?
We have used the class name attribute to specify our class, which is being toggled using the button provided.
ngClass
BindingAnother class binding we can do is using the ngClass
binding.
We can use the ngClass
to load classes dynamically too.
Just like the class name, we can load the classes either by string, by an array, or even by objects.
The ability to pass an object is the big advantage ngClass
has over className
, e.g.:
<div [ngClass]="['style1', 'style2']">array of classes</div>
<div [ngClass]="'style1 style2'">string of classes</div>
<div [ngClass]="{'style1': true, 'style2': true}">object of classes</div>
It should be noted that all three methods in the example above would give the same result.
But let’s take a look at the third option that uses an object.
We are allowed to pass an object to the ngClass
directive. The object contains a key of all the styles we want to load and a boolean
value of either true
or false
.
Let’s take a look at the example below:
Open up your src/app/app.component.ts
and replace it with this content:
//import the angular component from angular core
import { Component } from '@angular/core';
@Component({
// define the selector for your app
selector: 'app-root',
//pass in the template url
templateUrl: './app.component.html',
//pass in the css of the component
styleUrls: ['./app.component.css']
})
export class AppComponent {
title = 'app works!';
//set a property that holds a random color for our style.
randomcolor=this.getRandomColor();
//declare the fontsize and background color properties
public font_size="12px";
public background_color="grey ";
//declare a variable to hold class name:
public my_Class = 'style1';
//variable to hold a boolean value for style1
isClass1Visible = false;
//variable to hold a boolean value for style2
isClass2Visible = false;
//function to get random colors
public getRandomColor(){
var letters = '0123456789ABCDEF'.split('');
var color = '#';
for (var i = 0; i < 6; i++){
color += letters[Math.floor(Math.random() * 16)];
}
return color;
}
//function to set a new random color
setColor(){
this.randomcolor=this.getRandomColor()
}
//function to change the class from style1 to style 2 when clicked
toggle_class(){
if(this.my_Class=="style1"){
this.my_Class='style2';
}else{
this.my_Class='style1';
}
}
}
In the above code, we notice the addition of two new properties: isClass1Visible
and isClass2Visible
. These hold the default boolean values for style1 and style2 respectively.
Now let’s update our HTML structure to this:
<h1>
{{title}}
</h1>
<!---style binding for colors -->
<h2>Style Binding using 'style.color' directive</h2>
<!--call the random color property to set the color of this div -->
<div [style.color]="randomcolor"> I would be styled with different colors dynamically </div>
<!--attach a click function to this button to set the color dynamically -->
<button (click)="setColor()"> Set my color </button>
<!---style binding for ngStyle -->
<h2>Style Binding using 'ngStyle' directive</h2>
<!--call the style object to style class -->
<div [ngStyle]="{
'color': getRandomColor(),
'font-size': font_size,
'background-color': background_color
}"> I would be styled with different colors dynamically </div>
<!--attach a click function to this button to set the style dynamically -->
<input type="text" [(ngModel)]="background_color" placeholder="background_color">
<input type="text" [(ngModel)]="font_size" placeholder="font_size">
<!---class binding for ClassName -->
<h2>Class Binding using 'className' directive</h2>
<!--call the ngclass object to add a class name to it. -->
<div [className]="'style1'"> I would be classed using classname</div>
<!---class binding for ClassName -->
<h2>Class Binding using 'className' directive and variable</h2>
<!--call the ngclass object to add a class name to it. -->
<div [className]="my_Class"> I would be classed using classname</div>
<button (click)="toggle_class()">Toggle_class</button>
<!-- class binding using ngclass -->
<h2> Class Binding using 'ngClass' directive with objects and variables</h2>
<!--call the classes in the objects and their value -->
<div [ngClass]="{'style1': isClass1Visible, 'style2': isClass2Visible}">object of classes</div>
<!--button to toggle style1 -->
<button (click)="isClass1Visible = !isClass1Visible;">Toggle style 1</button>
<!-- button to toggle style2 -->
<button (click)="isClass2Visible = !isClass2Visible;">Toggle style 2</button>
What have we added? We have added a div
with the ngClass
binding, and we have passed it our object of classes, each with a default boolean value of false
.
We also have two buttons that flip those boolean values between false
and true
.
So as we click, the classes toggle on and off.
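The logic the template relies on can be sketched in plain JavaScript. The map below mirrors the object passed to ngClass (keys are CSS class names, values are booleans):

```javascript
// Sketch of the object [ngClass] evaluates: keys are CSS class names,
// values are booleans saying whether the class is currently applied.
var isClass1Visible = false;
var isClass2Visible = false;

function activeClasses() {
  var map = { style1: isClass1Visible, style2: isClass2Visible };
  return Object.keys(map).filter(function (name) { return map[name]; });
}

// nothing is applied initially; clicking "Toggle style 1" flips the flag
isClass1Visible = !isClass1Visible;
// activeClasses() now returns ['style1']
```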
Now, this is what our page should look like:
I hope you now understand how easy it is to play around with ngStyle
and ngClass
to make your application more reactive.
With this approach, you wouldn’t need jQuery for things like toggles and tabs, and it is a much cleaner approach.
Vue is a simple and minimal progressive JavaScript framework that can be used to build powerful web applications incrementally.
Vue is a lightweight alternative to other JavaScript frameworks like AngularJS. With an intermediate understanding of HTML, CSS, and JS, you should be ready to get up and running with Vue.
In this article, we will be building a to-do application with Vue while highlighting the bundle of awesomeness that it has to offer.
Let’s get started!
We’ll need the Vue CLI to get started. The CLI provides a means of rapidly scaffolding Single Page Applications and in no time you will have an app running with hot-reload, lint-on-save, and production-ready builds.
Vue CLI offers a zero-configuration development tool for jumpstarting your Vue apps and components.
A lot of the decisions you have to make regarding how your app scales in the future are taken care of. The Vue CLI comes with an array of templates that provide a self-sufficient, out-of-the-box ready to use package. The currently available templates are:
- webpack - A full-featured Webpack + Vue-loader setup with hot reload, linting, testing, and CSS extraction.
- webpack-simple - A simple Webpack + Vue-loader setup for quick prototyping.
- browserify - A full-featured Browserify + vueify setup with hot-reload, linting & unit testing.
- browserify-simple - A simple Browserify + vueify setup for quick prototyping.
- simple - The simplest possible Vue setup in a single HTML file.
Simply put, the Vue CLI is the fastest way to get your apps up and running.
- # install vue-cli
- npm install --global vue-cli
In this tutorial, we will be focusing on the use of single file components instead of instances. We’ll also touch on how to use parent and child components and data exchange between them. Vue’s learning curve is especially gentle when you use single-file components. Additionally, they allow you to place everything regarding a component in one place. When you begin working on large applications, the ability to write reusable components will be a lifesaver.
Next, we’ll set up our Vue app with the CLI.
- # create a new project using the "webpack" template
- vue init webpack todo-app
You will be prompted to enter a project name, description, author, and Vue build. We will not install Vue-router for our app. You will also be required to enable linting and testing options for the app. You can follow my example below.
Once we have initialized our app, we will need to install the required dependencies.
- # install dependencies and go!
- cd todo-app
- npm install
To serve the app, run:
- npm run dev
This will immediately open your browser and direct you to http://localhost:8080
. The page will look as follows.
To style our application we will use Semantic. Semantic is a development framework that helps create beautiful, responsive layouts using human-friendly HTML. We will also use Sweetalert to prompt users to confirm actions. Sweetalert is a library that provides beautiful alternatives to the default JavaScript alert. Add the minified JavaScript and CSS scripts and links to your index.html
file found at the root of your folder structure.
<!-- ./index.html -->
<head>
<meta charset="utf-8">
<title>todo-app</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<link rel="stylesheet" type="text/css" href="https://cdnjs.cloudflare.com/ajax/libs/semantic-ui/2.2.7/semantic.min.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/semantic-ui/2.2.7/semantic.min.js"></script>
<link rel="stylesheet" type="text/css" href="https://cdnjs.cloudflare.com/ajax/libs/sweetalert/1.1.3/sweetalert.min.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/sweetalert/1.1.3/sweetalert.min.js"></script>
</head>
Every Vue app needs to have a top-level component that serves as the framework for the entire application. For our application, we will have a main component, and nested within it will be a TodoList
component. Within this, there will be Todo
sub-components.
Let’s dive into building our application. First, we’ll start with the main top-level component. The Vue CLI already generates a main component that can be found in src/App.vue
. We will build out the other necessary components.
The Vue CLI creates a component Hello
during setup that can be found in src/components/Hello.vue
. We will create our own component called TodoList.vue
and won’t be needing this anymore.
Inside of the new TodoList.vue
file, write the following.
<template>
<div>
<ul>
<li> Todo A </li>
<li> Todo B </li>
<li> Todo C </li>
</ul>
</div>
</template>
<script type="text/javascript">
export default {
};
</script>
<style>
</style>
A component file consists of three parts: the template, the component class, and the styles section.
The template area is the visual part of a component. Behaviour, events, and data storage for the template are handled by the class. The style section serves to further improve the appearance of the template.
To utilize the component we just created, we need to import it into our main component. Inside of src/App.vue
make the following changes just above the script section and below the template closing tag.
// add this line
import TodoList from './components/TodoList'
// remove this line
import Hello from './components/Hello'
We will also need to reference the TodoList
component in the components property and delete the previous reference to Hello
component. After the changes, our script should look like this.
<script>
import TodoList from './components/TodoList';
export default {
components: {
// Add a reference to the TodoList component in the components property
TodoList,
},
};
</script>
To render the component, we invoke it like an HTML element. Multi-word component names are written with dashes (kebab-case) instead of camel case, as shown below.
<template>
<div>
<!-- Render the TodoList component: <TodoList> becomes <todo-list> -->
<todo-list></todo-list>
</div>
</template>
When we have saved our changes our rudimentary app should look like this.
We will need to supply data to the main component that will be used to display the list of todos. Our todos will have three properties: title
, project
, and done
(to indicate if the todo is complete or not). Components provide data to their respective templates using a data
function. This function returns an object with the properties intended for the template. Let’s add some data to our component.
export default {
name: 'app',
components: {
TodoList,
},
// data function provides data to the template
data() {
return {
todos: [{
title: 'Todo A',
project: 'Project A',
done: false,
}, {
title: 'Todo B',
project: 'Project B',
done: true,
}, {
title: 'Todo C',
project: 'Project C',
done: false,
}, {
title: 'Todo D',
project: 'Project D',
done: false,
}],
};
},
};
We will need to pass data from the main component to the TodoList
component. For this, we will use the v-bind
directive. The directive takes an argument which is indicated by a colon after the directive name. Our argument will be todos which tells the v-bind
directive to bind the element’s todos attribute to the value of the expression todos.
<todo-list v-bind:todos="todos"></todo-list>
The todos will now be available in the TodoList
component as todos
. We will have to modify our TodoList
component to access this data. The TodoList
component has to declare the properties it will accept when using it. We do this by adding a property to the component class.
export default {
props: ['todos'],
}
Inside our TodoList
template let’s loop over the list of todos and also show the number of completed and uncompleted tasks. To render a list of items, we use the v-for
directive. The syntax for doing this is represented as v-for="item in items"
where items is the array holding our data and item is an alias for the array element being iterated over.
<template>
<div>
<!-- JavaScript expressions in Vue are enclosed in double curly brackets. -->
<p>Completed Tasks: {{todos.filter(todo => {return todo.done === true}).length}}</p>
<p>Pending Tasks: {{todos.filter(todo => {return todo.done === false}).length}}</p>
<div class='ui centered card' v-for="todo in todos">
<div class='content'>
<div class='header'>
{{ todo.title }}
</div>
<div class='meta'>
{{ todo.project }}
</div>
<div class='extra content'>
<span class='right floated edit icon'>
<i class='edit icon'></i>
</span>
</div>
</div>
<div class='ui bottom attached green basic button' v-show="todo.done">
Completed
</div>
<div class='ui bottom attached red basic button' v-show="!todo.done">
Complete
</div>
</div>
</div>
</template>
<script type="text/javascript">
export default {
props: ['todos'],
};
</script>
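The two filter expressions in the template above are ordinary JavaScript. Extracted from the template and run against the sample data from the main component, the counting logic looks like this:

```javascript
// Same filter logic as the template expressions, in plain JavaScript,
// using the sample todos defined earlier in the main component.
var todos = [
  { title: 'Todo A', project: 'Project A', done: false },
  { title: 'Todo B', project: 'Project B', done: true },
  { title: 'Todo C', project: 'Project C', done: false },
  { title: 'Todo D', project: 'Project D', done: false },
];

var completed = todos.filter(function (todo) { return todo.done === true; }).length;
var pending = todos.filter(function (todo) { return todo.done === false; }).length;
// completed === 1, pending === 3
```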
Let’s extract the todo template into its own component for cleaner code. Create a new component file Todo.vue
in src/components
and transfer the todo template. Our file should now look like this:
<template>
<div class='ui centered card'>
<div class='content'>
<div class='header'>
{{ todo.title }}
</div>
<div class='meta'>
{{ todo.project }}
</div>
<div class='extra content'>
<span class='right floated edit icon'>
<i class='edit icon'></i>
</span>
</div>
</div>
<div class='ui bottom attached green basic button' v-show="todo.done">
Completed
</div>
<div class='ui bottom attached red basic button' v-show="!todo.done">
Complete
</div>
</div>
</template>
<script type="text/javascript">
export default {
props: ['todo'],
};
</script>
In the TodoList
component refactor the code to render the Todo
component. We will also need to change the way our todos are passed to the Todo
component. We can use the v-for
attribute on any components we create just like we would in any other element. The syntax will be like this: <my-component v-for="item in items" :key="item.id"></my-component>
.
Note that from 2.2.0 and above, a key
is required when using v-for
with components.
An important thing to note is that this does not automatically pass the data to the component since components have their own isolated scopes. To pass the data, we have to use props.
<my-component v-for="(item, index) in items" v-bind:item="item" v-bind:index="index">
</my-component>
Our refactored TodoList
component template:
<template>
<div>
<p>Completed Tasks: {{todos.filter(todo => {return todo.done === true}).length}}</p>
<p>Pending Tasks: {{todos.filter(todo => {return todo.done === false}).length}}</p>
<!-- we are now passing the data to the todo component to render the todo list -->
<todo v-for="todo in todos" v-bind:todo="todo" :key="todo.title"></todo>
</div>
</template>
<script type="text/javascript">
import Todo from './Todo';
export default {
props: ['todos'],
components: {
Todo,
},
};
</script>
Let’s add a property to the Todo
component class called isEditing
. This will be used to determine whether the Todo
is in edit mode or not. We will have an event handler on the Edit span in the template. This will trigger the showForm
method when it gets clicked. This will set the isEditing
property to true. Before we take a look at that, we will add a form and set conditionals to show the todo or the edit form depending on whether isEditing
property is true
or false
. Our template should now look like this.
<template>
<div class='ui centered card'>
<!-- Todo shown when we are not in editing mode. -->
<div class="content" v-show="!isEditing">
<div class='header'>
{{ todo.title }}
</div>
<div class='meta'>
{{ todo.project }}
</div>
<div class='extra content'>
<span class='right floated edit icon' v-on:click="showForm">
<i class='edit icon'></i>
</span>
</div>
</div>
<!-- form is visible when we are in editing mode -->
<div class="content" v-show="isEditing">
<div class='ui form'>
<div class='field'>
<label>Title</label>
<input type='text' v-model="todo.title" />
</div>
<div class='field'>
<label>Project</label>
<input type='text' v-model="todo.project" />
</div>
<div class='ui two button attached buttons'>
<button class='ui basic blue button' v-on:click="hideForm">
Close X
</button>
</div>
</div>
</div>
<div class='ui bottom attached green basic button' v-show="!isEditing && todo.done" disabled>
Completed
</div>
<div class='ui bottom attached red basic button' v-show="!isEditing && !todo.done">
Pending
</div>
</div>
</template>
In addition to the showForm
method we will need to add a hideForm
method to close the form when the cancel button is clicked. Let’s see what our script now looks like.
<script>
export default {
props: ['todo'],
data() {
return {
isEditing: false,
};
},
methods: {
showForm() {
this.isEditing = true;
},
hideForm() {
this.isEditing = false;
},
},
};
</script>
Since we have bound the form values to the todo values, editing the values will immediately edit the todo. Once done, we’ll press the close button to see the updated todo.
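v-model is doing the work here: it binds the input’s value to the data property and writes input events straight back. A rough plain-JavaScript sketch of that two-way sync (the input object below is a hypothetical stand-in, not a real DOM node):

```javascript
// Hypothetical stand-in for a DOM input wired up the way v-model wires it.
var todo = { title: 'Old title' };
var input = { value: todo.title, listeners: [] };

// the "v-model" wiring: input events write straight back to the data
input.listeners.push(function (value) { todo.title = value; });

function typeInto(field, text) {
  field.value = text;
  field.listeners.forEach(function (fn) { fn(text); });
}

typeInto(input, 'New title');
// todo.title is now 'New title' -- the todo was edited immediately
```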
Let’s begin by adding an icon to delete a Todo
just below the edit icon.
<template>
<span class='right floated edit icon' v-on:click="showForm">
<i class='edit icon'></i>
</span>
<!-- add the trash icon in below the edit icon in the template -->
<span class='right floated trash icon' v-on:click="deleteTodo(todo)">
<i class='trash icon'></i>
</span>
</template>
Next, we’ll add a method to the component class to handle the icon click. This method will emit an event delete-todo
to the parent TodoList
Component and pass the current Todo
to delete. We will add an event listener to the delete icon.
<span class='right floated trash icon' v-on:click="deleteTodo(todo)">
// Todo component
methods: {
deleteTodo(todo) {
this.$emit('delete-todo', todo);
},
},
The parent component (TodoList
) will need an event handler to handle the delete. Let’s define it.
// TodoList component
methods: {
deleteTodo(todo) {
const todoIndex = this.todos.indexOf(todo);
this.todos.splice(todoIndex, 1);
},
},
The deleteTodo
method will be passed to the Todo component as follows.
// TodoList template
<todo v-on:delete-todo="deleteTodo" v-for="todo in todos" v-bind:todo="todo" :key="todo.title"></todo>
Once we click on the delete icon, an event will be emitted and propagated to the parent component which will then delete it.
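The emit-and-handle round trip can be sketched without Vue. In the sketch below, the handlers table plays the role of the v-on listener registration (all names are illustrative):

```javascript
// Plain-JavaScript sketch of the child-emits / parent-handles delete flow.
var parent = {
  todos: [{ title: 'Todo A' }, { title: 'Todo B' }],
  deleteTodo: function (todo) {            // the TodoList handler
    var index = this.todos.indexOf(todo);
    this.todos.splice(index, 1);
  },
};

// v-on:delete-todo="deleteTodo" registers the parent handler
var handlers = { 'delete-todo': function (todo) { parent.deleteTodo(todo); } };

// this.$emit('delete-todo', todo) in the child boils down to a lookup + call
function emit(eventName, payload) { handlers[eventName](payload); }

emit('delete-todo', parent.todos[0]);      // clicking the trash icon
// parent.todos now contains only { title: 'Todo B' }
```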
To create a new todo, we’ll start by creating a new component CreateTodo in src/components
. This will display a button with a plus sign that will turn into a form when clicked. It should look something like this.
<template>
<div class='ui basic content center aligned segment'>
<button class='ui basic button icon' v-on:click="openForm" v-show="!isCreating">
<i class='plus icon'></i>
</button>
<div class='ui centered card' v-show="isCreating">
<div class='content'>
<div class='ui form'>
<div class='field'>
<label>Title</label>
<input v-model="titleText" type='text' ref='title' />
</div>
<div class='field'>
<label>Project</label>
<input v-model="projectText" type='text' ref='project' />
</div>
<div class='ui two button attached buttons'>
<button class='ui basic blue button' v-on:click="sendForm()">
Create
</button>
<button class='ui basic red button' v-on:click="closeForm">
Cancel
</button>
</div>
</div>
</div>
</div>
</div>
</template>
<script>
export default {
data() {
return {
titleText: '',
projectText: '',
isCreating: false,
};
},
methods: {
openForm() {
this.isCreating = true;
},
closeForm() {
this.isCreating = false;
},
sendForm() {
if (this.titleText.length > 0 && this.projectText.length > 0) {
const title = this.titleText;
const project = this.projectText;
this.$emit('create-todo', {
title,
project,
done: false,
});
this.titleText = '';
this.projectText = '';
}
this.isCreating = false;
},
},
};
</script>
After creating the new component, we import it and add it to the components property in the component class.
// Main Component App.vue
components: {
TodoList,
CreateTodo,
},
We’ll also add a method for creating new todos.
// App.vue
methods: {
// receives the full todo object emitted in the create-todo event
addTodo(todo) {
this.todos.push(todo);
},
},
The CreateTodo component will be invoked in the App.vue template as follows:
<create-todo v-on:create-todo="addTodo"></create-todo>
Finally, we’ll add a method completeTodo
to the Todo
component that emits an event complete-todo
to the parent component when the pending button is clicked and sets the done status of the todo
to true
.
// Todo component
methods: {
completeTodo(todo) {
this.$emit('complete-todo', todo);
},
}
An event handler will be added to the TodoList
component to process the event.
methods: {
completeTodo(todo) {
const todoIndex = this.todos.indexOf(todo);
this.todos[todoIndex].done = true;
},
},
To pass the TodoList
method to the Todo
component we will add it to the Todo
component invocation.
<todo v-on:delete-todo="deleteTodo" v-on:complete-todo="completeTodo" v-for="todo in todos" v-bind:todo="todo" :key="todo.title"></todo>
We have learned how to initialize a Vue app using the Vue CLI. In addition, we learned about component structure, adding data to components, event listeners, and event handlers. We saw how to create a todo, edit it and delete it. There is a lot more to learn. We used static data in our main component. The next step is to retrieve the data from a server and update it accordingly. We are now prepared to create an interactive Vue application. Try something else on your own and see how it goes. Cheers!
Bootstrap has a great many features. One of the main features that is used in pretty much every Bootstrap project I’ve ever seen is the grid system. The grid system provides a great tool to scaffold and build out any number of projects.
Today, we’ll be looking at a lesser-known grid feature, the grid column ordering classes. It is a simple feature, understated on their docs, but very powerful in the right scenarios.
Column ordering classes allow us to change the order of our grid system based on different browser sizes. This means that on a large screen, you can have a different grid than on a mobile screen.
For example, let’s say we have 3 columns (A, B, and C) on large screens. B will be the most prominent item we have. Right in the middle, front, and center.
A B C
On mobile devices, this grid will collapse to be A on top of B on top of C. We want B to take higher precedence, though, since it’s our most important element. We want it to be placed on the very top.
This is what we want for mobile devices:
B
A
C
How can we achieve this? Bootstrap provides a great way to handle this scenario, the push and pull column classes.
The classes are used along with the Bootstrap grid classes of .col-xs-#
, .col-sm-#
, .col-md-#
, and .col-lg-#
.
The push and pull classes applied to the Bootstrap grid are .col-xs-push-#
or .col-xs-pull-#
. That also works with sm
, md
, and lg
.
The push
class will move columns to the right while the pull
class will move columns to the left.
Now that we know what the classes are, let’s take our above example and turn that into working HTML and CSS. We will need to create the 3 different sections for large screens. This is easy enough. The code will look like:
<div class="row">
<div class="col-md-4">
<div class="alert alert-info">A</div>
</div>
<div class="col-md-4">
<div class="alert alert-danger">B</div>
</div>
<div class="col-md-4">
<div class="alert alert-info">C</div>
</div>
</div>
This gives us:
A B C
We now have our grid for medium to large devices (desktops). This is where we will have to add in the push
and pull
classes to accommodate the different order for mobile. We could add those classes here, but we have to make a quick adjustment first.
We must rearrange our HTML content so that B sits above the other content. This follows the mobile-first approach that is built into the Bootstrap grid. By rearranging our content, we now have:
<div class="row">
<div class="col-md-4 col-md-push-4">
<div class="alert alert-danger">B</div>
</div>
<div class="col-md-4 col-md-pull-4 col-sm-6">
<div class="alert alert-info">A</div>
</div>
<div class="col-md-4 col-sm-6">
<div class="alert alert-info">C</div>
</div>
</div>
The grid is much easier to see this way since we now just have to add push and pull classes for medium devices and above. Our grid will now behave the way we expect! Go ahead and resize your browser and see how our grid works:
B
A
C
Take the Bootstrap approach and create your content mobile-first. It is easier to push and pull columns on larger devices than it is on smaller devices. This is why you should focus on your mobile ordering first, and then move up.
Here are a few more examples:
See the Pen Column Reordering in Bootstrap by Chris Sevilleja (@sevilayha) on CodePen.
With this simple technique, we are able to rearrange columns without too much fuss. I’ve seen some developers use hide and show classes to display different grids on small to large devices, but these column reordering classes take care of all that for us.
For more on Bootstrap, take a look at our previous tutorials: Understanding the Bootstrap 3 Grid System and Bootstrap 3 Tips and Tricks You Might Not Know.
AngularJS provides a great way to make single-page applications. When creating single-page applications, routing will be very important. We want our navigation to feel like a normal site and still not have our site refresh. We’ve already gone through Angular routing using the normal ngRoute method.
Today we’ll be looking at routing using UI-Router.
The UI-Router is a routing framework for AngularJS built by the AngularUI team. It provides a different approach than ngRoute in that it changes your application views based on state of the application and not just the route URL.
With this approach, your views and routes aren’t tied down to the site URL. This way, you can change the parts of your site using your routing even if the URL does not change.
When using ngRoute
, you’d have to use ngInclude
or other methods and this could get confusing. Now that all of your states, routing, and views are handled in your one .config()
, it becomes much easier to take a top-down view of your application.
Let’s do something similar to the other routing tutorial we made.
View Demonstration of Angular routing and templating on plnkr
Let’s create a Home and About page.
Let’s get our application started. We will need a few files:
- index.html // will hold the main template for our app
- app.js // our angular code
- partial-about.html // about page code
- partial-home.html // home page code
- partial-home-list.html // injected into the home page
- table-data.html // re-usable table that we can place anywhere
With our application structure figured out, let’s fill out some files.
<!DOCTYPE html>
<html>
<head>
<!-- CSS (load bootstrap) -->
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css">
<style>
.navbar { border-radius:0; }
</style>
<!-- JS (load angular, ui-router, and our custom js file) -->
<script src="http://code.angularjs.org/1.2.13/angular.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/angular-ui-router/0.2.8/angular-ui-router.min.js"></script>
<script src="app.js"></script>
</head>
<!-- apply our angular app to our site -->
<body ng-app="routerApp">
<!-- NAVIGATION -->
<nav class="navbar navbar-inverse" role="navigation">
<div class="navbar-header">
<a class="navbar-brand" ui-sref="#">AngularUI Router</a>
</div>
<ul class="nav navbar-nav">
<li><a ui-sref="home">Home</a></li>
<li><a ui-sref="about">About</a></li>
</ul>
</nav>
<!-- MAIN CONTENT -->
<div class="container">
<!-- THIS IS WHERE WE WILL INJECT OUR CONTENT ============================== -->
<div ui-view></div>
</div>
</body>
</html>
There’s our HTML file. We will use Bootstrap to help with our styling. Notice that we also load up ui-router
in addition to loading Angular. UI Router is separate from the Angular core, just like ngRoute is separate.
When creating a link with UI-Router, you will use ui-sref
. The href
will be generated from it, pointing to a certain state of your application. These states are created in your app.js
.
We also use <div ui-view></div>
instead of ngRoute’s <div ng-view></div>
.
Let’s start up our Angular application now in app.js
.
var routerApp = angular.module('routerApp', ['ui.router']);
routerApp.config(function($stateProvider, $urlRouterProvider) {
$urlRouterProvider.otherwise('/home');
$stateProvider
// HOME STATES AND NESTED VIEWS ========================================
.state('home', {
url: '/home',
templateUrl: 'partial-home.html'
})
// ABOUT PAGE AND MULTIPLE NAMED VIEWS =================================
.state('about', {
// we'll get to this in a bit
});
});
Now we have created the routerApp
that we already applied to our body
in the index.html
file.
Here we have a .state()
for home and for about. In home, we are using the template file partial-home.html
.
Let’s fill out our partial-home.html
page so we can actually see the information.
<div class="jumbotron text-center">
<h1>The Home Page</h1>
<p>This page demonstrates <span class="text-danger">nested</span> views.</p>
</div>
Now we have our site! It doesn’t do much, but we have it.
Let’s look at how we can nest views. We’ll add two buttons to our home page and from there, we will want to show off different information based on what is clicked.
We’re going to add our buttons to partial-home.html
and then go into our Angular file and see how we can change it to add nested views.
<div class="jumbotron text-center">
<h1>The Home Page</h1>
<p>This page demonstrates <span class="text-danger">nested</span> views.</p>
<a ui-sref=".list" class="btn btn-primary">List</a>
<a ui-sref=".paragraph" class="btn btn-danger">Paragraph</a>
</div>
<div ui-view></div>
When linking to a nested view, we are going to use dot denotation: ui-sref=".list"
and ui-sref=".paragraph"
. These will be defined in our Angular file and once we set it up there, we will inject it into our new <div ui-view></div>
.
In our app.js
file, let’s create those nested states.
...
$stateProvider
// HOME STATES AND NESTED VIEWS ========================================
.state('home', {
url: '/home',
templateUrl: 'partial-home.html'
})
// nested list with custom controller
.state('home.list', {
url: '/list',
templateUrl: 'partial-home-list.html',
controller: function($scope) {
$scope.dogs = ['Bernese', 'Husky', 'Goldendoodle'];
}
})
// nested list with just some random string data
.state('home.paragraph', {
url: '/paragraph',
template: 'Random String Data'
})
...
Now the ui-sref
we defined in home.html
are linked to an actual state. With home.list
and home.paragraph
created, those links will now take the template provided and inject it into ui-view
.
The last thing we need to do for the home page is define the partial-home-list.html
file. We have also passed in a controller with a list of dogs that we will use in the template file.
<ul>
<li ng-repeat="dog in dogs">{{ dog }}</li>
</ul>
Now when we click List, it will inject our list of dogs into the template. Or if we click Paragraph, it will inject the string we gave.
You can see how easy it is to change different parts of our application based on the state. We didn’t have to do any sort of work with ngInclude
, ngShow
, ngHide
, or ngIf
. This keeps our view files cleaner since all the work is in our app.js
.
Let’s move on and see how we can have multiple views at once.
Having multiple views in your application can be very powerful. Maybe you have a sidebar on your site that has things like Popular Posts, Recent Posts, Users, or whatever. These can all be separated out and injected into our template. Each will have its own controller and template file so our app stays clean.
Having our application modular like this also lets us reuse data in different templates.
For our About page, let’s make two columns and have each have its own data. We will handle the view first and then look at how we can do this using UI-Router.
<div class="jumbotron text-center">
<h1>The About Page</h1>
<p>This page demonstrates <span class="text-danger">multiple</span> and <span class="text-danger">named</span> views.</p>
</div>
<div class="row">
<!-- COLUMN ONE NAMED VIEW -->
<div class="col-sm-6">
<div ui-view="columnOne"></div>
</div>
<!-- COLUMN TWO NAMED VIEW -->
<div class="col-sm-6">
<div ui-view="columnTwo"></div>
</div>
</div>
There we have multiple views. One is named columnOne
and the other is columnTwo
.
Why would somebody use this approach? That’s a good question: isn’t an application this modularized going to get confusing? Taken from the official UI-Router docs, here is a solid example of why you would have multiple named views. In their example, they show off different parts of an application. Each part has its own data, so giving each its own controller and template file makes building something like this easy.
Now that our view is all created, let’s look at how we can apply template files and controllers to each view. We’ll go back to our app.js
.
...
.state('about', {
url: '/about',
views: {
// the main template will be placed here (relatively named)
'': { templateUrl: 'partial-about.html' },
// the child views will be defined here (absolutely named)
'columnOne@about': { template: 'Column' },
// for column two, we'll define a separate controller
'columnTwo@about': {
templateUrl: 'table-data.html',
controller: 'scotchController'
}
}
});
}); // closes routerApp.config()
// let's define the scotch controller that we call up in the about state
routerApp.controller('scotchController', function($scope) {
$scope.message = 'test';
$scope.scotches = [
{
name: 'Macallan 12',
price: 50
},
{
name: 'Chivas Regal Royal Salute',
price: 10000
},
{
name: 'Glenfiddich 1937',
price: 20000
}
];
});
...
Just like that, our About page is ready to go. Now it may be confusing how we nested everything in the views
for the about state. Why not define a templateUrl
for the main page and then define the columns in a nested view object? The reason is that this naming scheme gives us a really great tool.
UI-Router assigns every view to an absolute name. The structure for this is viewName@stateName
. Since our main ui-view
sits inside our about state, we gave it a blank name. The other two views are columnOne@about
and columnTwo@about
.
Having the naming scheme this way lets us define multiple views inside a single state. The docs explain this concept very well and I’d encourage taking a look at their examples. Extremely powerful tools there.
This is an overview of the great tool that is UI-Router. The things you can do with it are incredible, and when you look at your application as states instead of reaching for the ngRoute
option, Angular applications can easily be built to be modular and extensible.
Ever since its announcement by Google, the adoption of progressive web apps has skyrocketed as many traditional web apps have been and are being converted to progressive web apps. In this tutorial, I’ll be showing you how to build a progressive web app with Nuxt.js. For the purpose of the demonstration, we’ll be building a news app.
This tutorial assumes a basic knowledge of progressive web apps.
Nuxt.js is a framework for building server-side rendered Vue.js applications.
We’ll start by creating a new Nuxt.js app. For this, we’ll make use of the Vue CLI, so you need to first install the Vue CLI in case you don’t have it installed already:
- npm install -g vue-cli
Then we can create a Nuxt.js app:
- vue init nuxt/starter pwa-news
Next, we need to install the dependencies:
- cd pwa-news
- npm install
We can now launch our app:
- npm run dev
The app should be running on http://localhost:3000
.
With our app up and running, let’s now install the necessary Nuxt.js modules that we’ll be needing for our news app:
- npm install @nuxtjs/axios @nuxtjs/bulma @nuxtjs/dotenv
Let’s quickly go over each of the modules:
- @nuxtjs/axios: integrates axios into the app for making HTTP requests.
- @nuxtjs/bulma: adds the Bulma CSS framework to the app.
- @nuxtjs/dotenv: loads variables from a .env
file into your context options.
Next, let’s make Nuxt.js use these modules. We’ll do that by adding them in the modules
section of the nuxt.config.js
file:
modules: [
'@nuxtjs/axios',
'@nuxtjs/dotenv',
'@nuxtjs/bulma'
]
Our news app will be built on News API. So we need to get our API key.
Click on the Get API key button then follow along with the registration to get your API key.
With our API key in place, let’s start building our news app. First, let’s update the layouts/default.vue
file as below:
<template>
<div>
<section class="hero has-text-centered is-primary">
<div class="hero-body">
<div class="container">
<h1 class="title">PWA News</h1>
<h2 class="subtitle">All the headlines making the waves!</h2>
</div>
</div>
</section>
<nuxt/>
</div>
</template>
<style>
html {
font-family: 'Source Sans Pro', -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
}
</style>
We are simply making use of the Bulma classes.
Next, let’s update the pages/index.vue
file as well:
<template>
<section class="section">
<div class="container">
<div class="columns is-multiline">
<div
class="column is-one-quarter"
v-for="(article, index) in articles"
:key="index"
>
<a :href="article.url" target="_blank">
<div class="card">
<div class="card-image">
<figure class="image is-3by2">
<img :src="article.urlToImage" :alt="article.title">
</figure>
</div>
<div class="card-content">
<div class="content">{{ article.title }}</div>
</div>
</div>
</a>
</div>
</div>
</div>
</section>
</template>
<script>
export default {
async asyncData({ app }) {
const { articles } = await app.$axios.$get(
`https://newsapi.org/v2/top-headlines?sources=cnn&apiKey=${
process.env.API_KEY
}`
);
return { articles };
},
};
</script>
In the template
section, we loop through the news headlines and display each headline in a card (using Bulma classes) with a link to view the news directly at the source. In the script
section, because we’ll be fetching the news headlines and rendering them on the server side, we make use of the asyncData
method. Then, using the Nuxt.js axios module installed earlier, we make a GET
request to the News API endpoint to fetch news headlines, passing along the source we want to fetch from and our API key. Lastly, we return an articles
object containing the fetched news headlines. With this, we can access the articles
object as we would access any other component data
.
You will notice we are getting our API key from an environment variable, which we are yet to create. Let’s do that now. Create a new .env
file directly in the project root directory:
- touch .env
Then add the code below into it:
API_KEY=YOUR_API_KEY
Now, if we test our app, we should get something similar to the image below:
So far our news app is feature complete. But our goal for this tutorial is to build a progressive web app. So, let’s add the progressive web app awesomeness to our news app. To do this, we make use of a Nuxt.js module called Nuxt PWA.
Using Nuxt PWA you can supercharge your current or next Nuxt project with a heavily tested, updated, and stable PWA solution and zero-config!
Nuxt PWA module is a collection of smaller modules that are designed to work together magically out of the box. These modules include icon, manifest (which generates the app’s manifest.json
file), meta, workbox, and onesignal. For the purpose of this tutorial, we’ll be making use of only the first 4 modules, as we won’t be covering push notifications.
The lovely thing about Nuxt PWA is that it works straight out of the box with zero configuration. So let’s install and set it up:
- npm install @nuxtjs/pwa
Next, we add the module to the nuxt.config.js
file:
modules: [
...,
'@nuxtjs/pwa'
]
Lastly, we need an icon for our news app. Though this is optional, it will give our app that native feel once our app is added to the home screen of our user’s devices. For this tutorial, we’ll use the icon from the Nuxt.js HackerNews clone. So download and place it in the static
directory.
Note: It is recommended that the icon be a square png and >= 512x512px
and named icon.png
.
That’s all we need to do to make use of the Nuxt PWA module. Because the workbox module is only enabled for production
builds by default, we need to build our news app for production:
- npm run build
Once the build is done, we can start our app with:
- npm start
Then we can view the app at http://localhost:3000
.
We get pretty much the same thing from earlier. To see indeed that our app is now a progressive web app, try setting our app to offline (which simulates no internet connection) under the Application tab on Chrome DevTools and refresh the page.
The app should still be running fine, as it has been cached for offline viewing.
Also, we can use the Lighthouse extension to test if our app meets the standards for a progressive web app. Under the Audits tab (you might have to download the Lighthouse extension if you can’t find this tab) of Chrome DevTools, click on the perform an audit… button and click on Run audit:
This will start performing some checks on our app. Once it’s complete, we should get a screen as below:
We are more focused on the Progressive Web App section. As you can see, our app got an 82/100, which is a great score. If we narrow the result down, you’ll see that our app only failed 2 audits, which is understandable since our app is still running on localhost.
So in this tutorial, we looked at how we can build a progressive web app with Nuxt.js. With this tutorial, you should be able to convert an existing Nuxt.js application into a progressive web app. We also looked at how we can test a progressive web app using the Lighthouse extension. I hope you found this tutorial helpful.
With the 3rd version of the great Bootstrap out for about 4 and a half months now, people have had their time to play around with it, learn the changes, find new features, and build amazing things.
The most interesting change for me was the difference in the grid system. Bootstrap 2 catered to two different browser sizes (desktop and then mobile). With Bootstrap 3, you now build with mobile in mind first, and the grid system lets you create different grid systems based on browser size.
The grid you create works on desktops and then stacks on top of each other when the browser size is below 767px. This is limited since you can only define 1 grid on desktop-sized browsers. You are left with a stacked grid on mobile devices.
The new Bootstrap grid system applies to mobile-first. When you declare a specific grid size, that is the grid for that size and above. This can be a little hard to grasp at first so here’s an example.
For example, let’s say you want a site that has:
Since the grid system now cascades up from mobile devices, this is how this code will look.
<div class="row">
<div class="col-sm-6 col-lg-3">
This is part of our grid.
</div>
<div class="col-sm-6 col-lg-3">
This is part of our grid.
</div>
<div class="col-sm-6 col-lg-3">
This is part of our grid.
</div>
<div class="col-sm-6 col-lg-3">
This is part of our grid.
</div>
</div>
We don’t have to define anything for extra small devices since the default is one column. We have to define a grid size for small devices, but not for medium devices. This is because the grid cascades up. So if you define a size at sm
, then it will be that grid size for sm
, md
, and lg
.
We’ll explain the different grid sizes and how you create them and then show examples.
This is the best part about the new grid system. You could realistically have your site show a different grid on 4 different browser sizes. Below is the breakdown of the different sizes.
Name | Target Widths | Code |
---|---|---|
Extra Small | Phones Less than 768px | .col-xs-$ |
Small Devices | Tablets 768px and Up | .col-sm-$ |
Medium Devices | Desktops 992px and Up | .col-md-$ |
Large Devices | Large Desktops 1200px and Up | .col-lg-$ |
The official Bootstrap docs offer a much more comprehensive understanding of how the grid works. Take a look at those to get a more solid overview of column sizes, gutter sizes, maximum column sizes, and the max-width of your overall site based on the browser size.
Sometimes you will need to use media queries to get your site to act the way you’d like it to. Knowing the default grid sizes is essential to extending the Bootstrap grid. We’ve written up a quick tip to show you the default sizes so take a look if you need the Bootstrap media queries and breakpoints.
Bootstrap Media Queries and Breakpoints
Just like Bootstrap 2, Bootstrap 3 provides responsive utilities for hiding and showing elements based on the browser size. This will also help us in defining our grid system.
.visible-xs
.visible-sm
.visible-md
.visible-lg
.hidden-xs
.hidden-sm
.hidden-md
.hidden-lg
This helps because we are able to show certain elements based on size. In our examples today, we’ll be showing an extra sidebar on large desktops.
Here are a few examples of grids that you can create. We’ll go through some basic sites that some people might want and show how easy it is to build that site with the Bootstrap 3 grid.
Resize your browser’s width to see the different grids in action.
Let’s say you wanted a site to have 1 column on extra small (phone) and small (tablet) devices, 2 columns on medium (medium desktop) devices, and 4 columns on large (desktop) devices.
Here is the code for that example:
<div class="row">
<div class="col-md-6 col-lg-3">
<div class="visible-lg text-success">Large Devices!</div>
<div class="visible-md text-warning">Medium Devices!</div>
<div class="visible-xs visible-sm text-danger">Extra Small and Small Devices</div>
</div>
<div class="col-md-6 col-lg-3">
<div class="visible-lg text-success">Large Devices!</div>
<div class="visible-md text-warning">Medium Devices!</div>
<div class="visible-xs visible-sm text-danger">Extra Small and Small Devices</div>
</div>
<div class="col-md-6 col-lg-3">
<div class="visible-lg text-success">Large Devices!</div>
<div class="visible-md text-warning">Medium Devices!</div>
<div class="visible-xs visible-sm text-danger">Extra Small and Small Devices</div>
</div>
<div class="col-md-6 col-lg-3">
<div class="visible-lg text-success">Large Devices!</div>
<div class="visible-md text-warning">Medium Devices!</div>
<div class="visible-xs visible-sm text-danger">Extra Small and Small Devices</div>
</div>
</div>
This is an interesting example and one that the new grid excels at. Let’s say you have a site that has a sidebar and a main content section. For extra small devices, you want one column, main content with the sidebar stacked below it. For small and medium devices, we want the sidebar and main content to sit side by side. Now for large devices, we want to utilize the space on larger devices. We want to add an extra sidebar to show more content.
We change the size of the main content to span 6 columns on large devices to make room for our second sidebar. This is a great way to utilize the space on larger desktops. And here is the code for that example.
<div class="row">
<div class="col-sm-9 col-lg-6 text-danger">
I am the main content.
</div>
<div class="col-sm-3 text-warning">
I am the main sidebar.
</div>
<div class="col-lg-3 visible-lg text-success">
I am the secondary sidebar that only shows up on LARGE devices.
</div>
</div>
This will be a more complex example. Let’s say that at no point in our grid system do we want all of our columns to stack. For extra small devices, we want 2 columns. For small devices, we want 3 columns. For medium devices, we want 4 columns. For large devices, we want 6 columns (one that only shows on large devices).
You get the drill by now. Let’s jump straight into the example and code.
<div class="row">
<div class="col-xs-6 col-sm-4 col-md-3 col-lg-2">
I'm content!
</div>
<div class="col-xs-6 col-sm-4 col-md-3 col-lg-2">
I'm content!
</div>
<div class="col-xs-6 col-sm-4 col-md-3 col-lg-2">
I'm content!
</div>
<div class="col-xs-6 col-sm-4 col-md-3 col-lg-2">
I'm content!
</div>
<div class="col-xs-6 col-sm-4 col-md-3 col-lg-2">
I'm content!
</div>
<div class="col-xs-6 col-sm-4 col-md-3 col-lg-2 visible-lg">
I'm content only visible on large devices!
</div>
</div>
You can see that as the browser size gets smaller, the columns start to form. Also, the content inside each will begin stacking.
You can see how easy it is to build complex and dynamic sites with the Bootstrap 3 grid. From mobile 2 column sites to complex hiding and showing elements on large desktops, you can build any type of site. Hopefully, these examples will give you an idea of the flexibility of the new grid system and all the great things you can create.
In this tutorial, we will explore how to bind these types of controls to our form: text, number, radio, select (primitive type), select (object), multiple select, checkbox (boolean), and checkbox (toggle value).
Feel free to skip some of the control types (as some of them are really simple).
If you are new to Angular 2 forms, do refer to these articles for basics.
View Angular 2 - Different form controls (final) scotch on plnkr
We will build a form to capture user information based on these interfaces.
export interface User {
name: string; // text
age?: number; // number
language?: string; // radio
role?: string; // select (primitive)
theme?: Theme; // select (object)
topics?: string[]; // multiple select
isActive?: boolean; // checkbox
toggle?: string; // checkbox toggle either 'toggled' or 'untoggled'
}
export interface Theme {
display: string;
backgroundColor: string;
fontColor: string;
}
Here is how the UI will look:
Here’s our file structure:
|- app/
|- app.component.html
|- app.component.ts
|- app.module.ts
|- main.ts
|- theme.interface.ts
|- user.interface.ts
|- index.html
|- styles.css
|- tsconfig.json
In order to use the forms module, we need to npm install @angular/forms
npm package and import the forms module in the application module.
- npm install @angular/forms --save
Here’s the module for our application app.module.ts
:
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';
import { AppComponent } from './app.component';
@NgModule({
imports: [ BrowserModule, FormsModule ], // import forms module
declarations: [ AppComponent ],
bootstrap: [ AppComponent ]
})
export class AppModule { }
Let’s move on to create our app component.
import { Component, OnInit } from '@angular/core';
import { User } from './user.interface';
import { Theme } from './theme.interface';
@Component({
moduleId: module.id,
selector: 'my-app',
templateUrl: 'app.component.html',
directives: []
})
export class AppComponent implements OnInit {
public user: User;
/* standing data goes here*/
...
/* end standing data */
ngOnInit() {
// initialize user model here
}
public save(isValid: boolean, f: User) {
console.log(f);
}
}
We need to include some data for setup as well:
...
/* standing data goes here*/
public languages = [
{ value: 'en', display: 'English' },
{ value: 'es', display: 'Spanish' }
];
public roles = [
{ value: 'admin', display: 'Administrator' },
{ value: 'guest', display: 'Guest' },
{ value: 'custom', display: 'Custom' }
];
public themes: Theme[] = [
{ backgroundColor: 'black', fontColor: 'white', display: 'Dark' },
{ backgroundColor: 'white', fontColor: 'black', display: 'Light' },
{ backgroundColor: 'grey', fontColor: 'white', display: 'Sleek' }
];
public topics = [
{ value: 'game', display: 'Gaming' },
{ value: 'tech', display: 'Technology' },
{ value: 'life', display: 'Lifestyle' },
];
public toggles = [
{ value: 'toggled', display: 'Toggled' },
{ value: 'untoggled', display: 'UnToggled' },
];
/* end standing data */
...
Then, we need to initialize our user model:
...
ngOnInit() {
// initialize user model here
this.user = {
name: '',
language: this.languages[0].value, // default to English
role: null,
theme: this.themes[0], // default to dark theme
isActive: false,
toggle: this.toggles[1].value, // default to untoggled
topics: [this.topics[1].value] // default to Technology
}
}
...
This is how our HTML view will look.
<form #f="ngForm" novalidate>
<!-- We'll add our form controls here -->
<button type="submit" (click)="save(f.valid, f.value)">Submit</button>
</form>
Let’s start to look into each type of control.
Getting text input is very straightforward. You need the name
attribute, and ngModel
.
...
<div>
<label>Name</label>
<input type="text" name="name" [(ngModel)]="user.name">
</div>
...
Getting number input is also very straightforward.
...
<div>
<label>Age</label>
<input type="number" name="age" [(ngModel)]="user.age">
</div>
...
Binding radio inputs was not that easy prior to Angular RC 2. With the new forms module from RC 3 onward, we can bind directly to ngModel
and set the value
property.
We have this list of preferred languages:
public languages = [
{ value: 'en', display: 'English' },
{ value: 'es', display: 'Spanish' }
];
When one is selected, we want only the value en or es.
...
<div>
<label>Language</label>
<div *ngFor="let language of languages">
<label>
<input type="radio" name="language" [(ngModel)]="user.language"
[value]="language.value">
{{language.display}}
</label>
</div>
</div>
...
You can bind a select to ngModel
. Loop through your option
list and set the value
property on each option.
We have a list of roles:
public roles = [
{ value: 'admin', display: 'Administrator' },
{ value: 'guest', display: 'Guest' },
{ value: 'custom', display: 'Custom' }
];
When a value is selected, we expect it to return the string value admin, guest, or custom. Here’s how your HTML will look:
...
<div>
<label>Role</label>
<select name="role" [(ngModel)]="user.role">
<option *ngFor="let role of roles" [value]="role.value">
{{role.display}}
</option>
</select>
</div>
...
Similar to the last example, but this time, instead of a simple type, we want the whole object when it’s selected.
Here is the list of themes:
public themes: Theme[] = [
{ backgroundColor: 'black', fontColor: 'white', display: 'Dark' },
{ backgroundColor: 'white', fontColor: 'black', display: 'Light' },
{ backgroundColor: 'grey', fontColor: 'white', display: 'Sleek' }
];
When selected, for example Light theme, we expect { backgroundColor: 'white', fontColor: 'black', display: 'Light' }
to be returned. Instead of binding to the value property, we bind to ngValue
property.
...
<div>
<label>Theme</label>
<select name="theme" [(ngModel)]="user.theme">
<option *ngFor="let theme of themes" [ngValue]="theme">
{{theme.display}}
</option>
</select>
</div>
...
We can select more than one topic. E.g. when selecting game and tech, it should return ['game', 'tech']
.
public topics = [
{ value: 'game', display: 'Gaming' },
{ value: 'tech', display: 'Technology' },
{ value: 'life', display: 'Lifestyle' },
];
Similar to select, but this time our model is an array of strings.
...
<div>
<label>Topics</label>
<select multiple name="topics" [(ngModel)]="user.topics">
<option *ngFor="let topic of topics" [value]="topic.value">
{{topic.display}}
</option>
</select>
</div>
...
By default, checkboxes return a boolean. Bind ngModel
and define the name attribute as usual.
...
<div>
<label>
<input type="checkbox" name="isActive" [(ngModel)]="user.isActive">
Is Active
</label>
</div>
...
In this case, we want to display a checkbox, but instead of a boolean we expect a string value: when checked it should return toggled, otherwise untoggled.
This is the list of toggles:
public toggles = [
{ value: 'toggled', display: 'Toggled' },
{ value: 'untoggled', display: 'UnToggled' },
];
First, we define a hidden input to bind to the real model. Then we create the checkbox input and handle its checked
property and change
event. The change event fires every time the value changes, and it carries an $event
object that we can read from.
In our case, we read $event.target.checked
to find out whether the checkbox is checked, then update the model value accordingly.
...
<div>
<input type="hidden" name="toggle" [(ngModel)]="user.toggle">
<div>
<label>
<input type="checkbox"
[checked]="user.toggle === toggles[0].value"
(change)="$event.target.checked? (user.toggle = toggles[0].value) : (user.toggle = toggles[1].value)">
{{ toggles[0].display }}
</label>
</div>
</div>
...
During development, it’s good to be able to visualize values. Angular provides a very useful json
Pipe.
...
<pre>{{your_form or control_name | json }}</pre>
...
That’s it. Hope it helps your journey in Angular 2. Happy coding!
A regular expression is simply a sequence of characters that define a pattern.
When you want to match a string to perhaps validate an email or password, or even extract some data, a regex is an indispensable tool.
Everything in regex is a character, even an empty space character ( ).
While Unicode characters can be used to match any international text, most patterns use normal ASCII characters (letters, digits, punctuation, and keyboard symbols like $@%#!).
Regular expressions are everywhere. Here are some of the reasons why you should learn them:
Are there any real-world applications?
Common applications of regex are:
Also, regex is used for text matching in spreadsheets, text editors, IDEs, and Google Analytics.
We are going to use Python to write some regex. Python is known for its readability so it makes it easier to implement them.
In Python, the re module provides full support for regular expressions.
A GitHub repo contains code and concepts we’ll use here.
Python uses raw string notations to write regular expressions – r"write-expression-here"
First, we’ll import the re
module. Then write out the regex pattern.
import re
pattern = re.compile(r"")
The purpose of the compile
method is to compile the regex pattern which will be used for matching later.
It’s advisable to compile a regex when it will be used several times in your program. Saving the resulting regular expression object for reuse, which re.compile
does, is more efficient.
To add some regular expression inside the raw string notation, we’ll put some special sequences to make our work easier.
They are simply sequences of characters that begin with a backslash character (\
).
For instance:
\d
is a match for one digit [0-9]
\w
is a match for one alphanumeric character. This means any ASCII character that’s a letter, a digit, or an underscore: [a-zA-Z0-9_]
It’s important to know them since they help us write simpler and shorter regex.
Here’s a table with more special sequences
Element | Description |
---|---|
. |
This element matches any character except \n |
\d |
This matches any digit [0-9] |
\D |
This matches non-digit characters [^0-9] |
\s |
This matches whitespace character [ \t\n\r\f\v] |
\S |
This matches non-whitespace character [^ \t\n\r\f\v] |
\w |
This matches alphanumeric character [a-zA-Z0-9_] |
\W |
This matches any non-alphanumeric character [^a-zA-Z0-9_] |
Points to note:
[0-9]
is the same as [0123456789]
\d
is short for [0-9]
\w
is short for [a-zA-Z0-9_]
[7-9]
is the same as [789]
Having learned something about special sequences, let’s continue with our coding. Write down and run the code below.
import re
pattern = re.compile(r"\w")
# Let's feed in some strings to match
string = "regex is awesome!"
# Then call a matching method to match our pattern
result = pattern.match(string)
print result.group() # will print out 'r'
The match method returns a match object, or None
if no match was found.
We are printing result.group()
. The group()
is a match object method that returns the entire match. If the pattern doesn’t match at all, match() returns None
instead of a match object, which means there was no match to our compiled pattern.
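To see the no-match case directly, here is a minimal sketch (the pattern and sample strings are my own, for illustration):

```python
import re

# A pattern that finds nothing returns None instead of a match object.
pattern = re.compile(r"\d")        # one digit
result = pattern.match("regex")    # "regex" does not start with a digit
print(result)                      # None
print(pattern.match("7 up"))       # a match object whose group() is '7'
```

Always check for None before calling .group(), or you will get an AttributeError.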
You may wonder why the output is only a letter and not the whole word. It’s simply because \w
matches exactly one character, so only the first letter at the start of the string is returned.
We’ve just written our first regex program!
We want to do more than simply matching a single letter. So we amend our code to look like this:
# Replace the pattern variable with this
pattern = re.compile(r"\w+") # Notice the plus sign we just added
The plus symbol (+
) on our second pattern is what we call a quantifier.
Quantifiers simply specify the number of characters to match.
Here are some other regex quantifiers and how to use them.
Quantifier | Description | Example | Sample match |
---|---|---|---|
+ |
one or more | \w+ |
ABCDEF097 |
{2} |
exactly 2 times | \d{2} |
01 |
{1,} |
one or more times | \w{1,} |
smiling |
{2,4} |
2, 3 or 4 times | \w{2,4} |
1234 |
* |
0 or more times | A*B |
AAAAB |
? |
once or none (lazy) | \d+? |
1 in 12345 |
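To illustrate the greedy/lazy distinction from the last row of the table, here is a small sketch (the sample string is my own):

```python
import re

# The greedy + consumes as many digits as it can,
# while the lazy +? stops as soon as the pattern can succeed.
greedy = re.match(r"\d+", "12345").group()
lazy = re.match(r"\d+?", "12345").group()
print(greedy)  # 12345
print(lazy)    # 1
```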
Let’s write some more quantifiers in our program!
import re
def regex(string):
"""This function returns at least one matching digit."""
pattern = re.compile(r"\d{1,}") # For brevity, this is the same as r"\d+"
result = pattern.match(string)
if result:
return result.group()
return None
# Call our function, passing in our string
regex("007 James Bond")
The above regex uses a quantifier to match at least one digit.
Calling the function will print this output: '007'
^
and $
?You may have noticed that a regex usually has the caret (^
) and dollar sign ($
) characters. For example, r"^\w+$"
.
Here’s why.
^
and $
are boundaries or anchors. ^
marks the start, while $
marks the end of a regular expression.
However, when used in square brackets [^ … ] it means not
. For example, [^\s$]
or just [^\s]
will tell regex to match anything that is not a whitespace character.
Let’s write some code to prove this
import re
line = "dance more"
result = re.match(r"[^\s]+", line)
print result.group() # Prints out 'dance'
First, notice there’s no re.compile
this time. Programs that use only a few regular expressions at a time don’t have to compile a regex. We, therefore, don’t need re.compile
for this.
Next, re.match()
takes in an optional string argument as well, so we fed it with the line
variable.
Moving on swiftly!
Let’s look at a new concept: search.
The match method checks for a match only at the beginning of the string, while a re.search()
checks for a match anywhere in the string.
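The difference can be sketched directly (the sample string here is my own):

```python
import re

# match() anchors at the beginning of the string; search() scans the whole string.
text = "the quick fox"
print(re.match(r"fox", text))           # None, since 'fox' is not at the start
print(re.search(r"fox", text).group())  # fox
```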
Let’s write some search functionality.
import re
string = "\n dreamer"
result = re.search(r"\w+", string, re.MULTILINE)
print result.group() # Prints out 'dreamer'
The search
method, like the match method, can also take an extra argument.
The re.MULTILINE
flag changes how the ^ and $ anchors behave: they match at the start and end of every line rather than only of the whole string. (Our pattern here has no anchors, so the flag makes no difference in this particular example.)
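A quick sketch of what re.MULTILINE changes (the sample string is my own):

```python
import re

# Without re.MULTILINE, ^ only matches at the very start of the string.
# With it, ^ also matches right after every newline.
text = "one\ntwo"
print(re.findall(r"^\w+", text))                # ['one']
print(re.findall(r"^\w+", text, re.MULTILINE))  # ['one', 'two']
```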
Let’s take a look at another example of how search
works:
import re
pattern = re.compile(r"^<html>")
result = pattern.search("<html></html>")
print result.group()
This will print out <html>
.
The re.split()
splits a string into a list delimited by the passed pattern.
For example, consider having names read from a file that we want to put in a list:
text = """Alpha
Beta
Gamma
Delta"""
We can use split to read each line and split them into an array as such:
import re
results = re.split(r"\n+", text)
print results # will print: ['Alpha', 'Beta', 'Gamma', 'Delta']
But what if we wanted to find all instances of words in a string?
Enter re.findall
.
re.findall()
finds all the matches of all occurrences of a pattern, not just the first one as re.search()
does. Unlike search which returns a match object, findall
returns a list of matches.
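The contrast between the two can be sketched in a couple of lines (the sample string is my own):

```python
import re

# search() stops at the first occurrence; findall() collects every occurrence.
text = "cat bat mat"
first = re.search(r"\w+at", text).group()
every = re.findall(r"\w+at", text)
print(first)  # cat
print(every)  # ['cat', 'bat', 'mat']
```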
Let’s write and run this functionality.
import re
def finder(string):
"""This function finds all the words in a given string."""
result_list = re.findall(r"\w+", string)
return result_list
# Call finder function, passing in the string argument
finder("finding dory")
The output will be a list: ['finding', 'dory']
Let’s say we want to search for people with 5 or 6-figure salaries.
Regex will make it easy for us. Let’s try it out:
import re
salaries = "120000 140000 10000 1000 200"
result_list = re.findall(r"\d{5,6}", salaries)
print result_list # prints out: ['120000', '140000', '10000']
Suppose we wanted to do some string replacement. The re.sub
method will help us do that.
It simply returns a string that has undergone some replacement using a matched pattern.
Let’s write code to do some string replacement:
import re
pattern = re.compile(r"[0-9]+")
result = pattern.sub("__", "there is only 1 thing 2 do")
print result
The program’s aim is to replace any digit in the string with the __
characters.
Therefore, the print output will be there is only __ thing __ do
Let’s try out another example. Write down the following code:
import re
pattern = re.compile(r"\w+") # Match only alphanumeric characters
input_string = "Lorem ipsum with steroids"
result = pattern.sub("regex", input_string) # replace with the word regex
print result # prints 'regex regex regex regex'
We have managed to replace the words in the input string with the word “regex”. Regex is very powerful in string manipulations.
Sometimes you might encounter this (?=)
in regex. This syntax defines a look ahead.
Instead of matching from the start of the string, match an entity that’s followed by the pattern.
For instance, r"a(?=b)"
will return a match for a
only if it is immediately followed by b
.
Let’s write some code to elaborate on that.
import re
pattern = re.compile(r'\w+(?=\sfox)')
result = pattern.search("The quick brown fox")
print result.group() # prints 'brown'
The pattern tries to match the closest string that is followed by a space character and the word fox
.
Let’s look at another example. Go ahead and write this snippet:
"""
Match any word followed by a comma.
The example below is not the same as re.compile(r"\w+,")
For this will result in [ 'me,' , 'myself,' ]
"""
pattern = re.compile(r"\w+(?=,)")
res = pattern.findall("Me, myself, and I")
print res
The above regex tries to match all runs of word characters that are immediately followed by a comma.
When we run this, we should print out a list containing: [ 'Me', 'myself' ]
What if you wanted to match a string that has a bunch of these special regex characters?
A backslash is used to define special characters in regex. So to match them as literal characters in our pattern string, we need to escape them with \\
.
Here’s an example.
import re
pattern = re.compile('\\\\')
result = pattern.match("\\author")
print result.group() # will print \
Let’s try it one more time just to get it – Suppose we want to include a +
(a reserved quantifier) in a string to be matched by a pattern. We’ll do something like this:
import re
pattern = re.compile(r"\w+\+") # match alphanumeric characters followed by a + character
result = pattern.search("file+")
print result.group() # will print out file+
We have successfully escaped the +
character so that regex does not mistake it for a quantifier.
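As a convenience worth knowing (not used in the examples above, which escape by hand), the re module also provides re.escape, which backslash-escapes every special character in a literal string for you:

```python
import re

# re.escape turns a literal string into a pattern-safe string,
# so characters like + are matched literally instead of acting as quantifiers.
literal = re.escape("file+")
result = re.search(literal, "open file+ now")
print(result.group())  # file+
```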
For a real-world application, here’s a function that formats a number with thousands-separator commas.
import re
number = input("Enter your number\n")
def monetizer(number):
"""This function adds a thousands separator using comma characters."""
number = str(number)
try:
if type(int(number)) == int:
# Format into groups of three from the right to the left
pattern = re.compile(r'\d{1,3}(?=(\d{3})+(?!\d))')
# substitute with a comma then return
return pattern.sub(r'\g<0>,', number)
except:
return "Not a Number"
# Function call, passing in number as an argument
print monetizer(number)
As you might have noticed, the pattern uses a look-ahead mechanism. The parentheses group the digits into clusters of three, which are then separated by the commas.
For example, the number 1223456 will become 1,223,456.
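As an aside, and separate from the regex approach shown above, modern Python can produce the same thousands-separated output with a built-in format spec, which makes a handy sanity check:

```python
# the built-in "," format spec groups digits by thousands
n = 1223456
print(f"{n:,}")  # 1,223,456
```

The regex version is still useful when the number arrives as a string you don't want to parse first.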
Congratulations on making it to the end of this intro! From the special sequences of characters, matching and searching, to finding all matches with lookaheads and manipulating strings in regex, we've covered quite a lot.
There are some advanced concepts in regex such as backtracking and performance optimization which we can continue to learn as we grow. A good resource for more intricate details would be the re module documentation.
Great job for learning something that many consider difficult!
If you found this helpful, spread the word.
Working with Angular has been a delight. TypeScript, Observables, and the CLI are great tools for development.
One pain point I've run into when working with Angular is importing components and classes around your application.
Take the following for example:
import { HeaderComponent } from './components/header/header.component';
import { FooterComponent } from './components/footer/footer.component';
import { GifService } from '../core/services/gif.service';
All the imports here are referenced relatively. It can be a hassle to remember how many folders to jump into and out of.
If you move your files around, you’ll have to update all your import paths.
Let’s look at how we can reference imports absolutely so that TypeScript always looks at the root /src
folder when finding items.
Our goal for this will be to reference things like so:
import { HeaderComponent } from '@app/components/header/header.component';
import { FooterComponent } from '@app/components/footer/footer.component';
import { GifService } from '@app/core/services/gif.service';
This is similar to how Angular imports are referenced using @angular
like @angular/core
or @angular/router
.
Since TypeScript is in charge of transpiling our Angular apps, we'll configure our paths in tsconfig.json.
In the tsconfig.json, we'll do two things using two of the compiler options:
- baseUrl: Set the base folder as /src
- paths: Tell TypeScript to look for @app in the /src/app folder
baseUrl is the base directory used to resolve non-relative module names. paths is a map of module-name patterns to locations relative to the baseUrl.
Here’s the original tsconfig.json
that comes with a new Angular CLI install. We’ll add our two lines to compilerOptions
.
{
"compileOnSave": false,
"compilerOptions": {
...
"baseUrl": "src",
"paths": {
"@app/*": ["app/*"]
}
}
}
With that in our tsconfig.json
, we can now reference items absolutely!
import { HeaderComponent } from '@app/components/header/header.component';
import { FooterComponent } from '@app/components/footer/footer.component';
import { GifService } from '@app/core/services/gif.service';
This is great because we can now move our files around and not have to worry about updating paths everywhere.
For my projects, I like to update the @app to be something more personalized. Scotch projects use @scotch and other projects use @batcave.
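As a sketch of such a personalized setup (the @scotch and @env aliases here are just illustrative names, not part of the project above), the compilerOptions can map several aliases at once:

```json
"compilerOptions": {
  "baseUrl": "src",
  "paths": {
    "@scotch/*": ["app/*"],
    "@env/*": ["environments/*"]
  }
}
```

Each key is a pattern TypeScript matches against import specifiers, and each value is the folder (relative to baseUrl) it resolves to.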
HammerJS is a popular library that helps you add support for touch gestures (e.g. swipe, pan, zoom, rotate) to your page.
In this article, we will see how easily Angular 2 can work with HammerJS.
We will be building a carousel of avatars. The user can swipe left or right to view each avatar. Test it out yourself here (it works best on mobile, but was tested on Chrome and Firefox desktop browsers with an emulator).
Live demo with full source code here: https://plnkr.co/plunker/LCsiXOtzSedGZDbGQ3f8?p=preview
Let's take a look at what our folder structure looks like. We'll have an app folder that contains our avatar carousel and a main.ts file for bootstrapping our application.
|- app/
|- app.component.html
|- app.component.css
|- app.component.ts
|- app.module.ts
|- main.ts
|- index.html
|- tsconfig.json
Let’s start with our app component. In this component, we will define our list of avatars and handle the swipe event and show/hide an avatar based on the swipe sequence.
// app/app.component.ts
import { Component } from '@angular/core';
@Component({
moduleId: module.id,
selector: 'my-app',
templateUrl: 'app.component.html',
styleUrls: ['app.component.css']
})
export class AppComponent {
// constant for swipe action: left or right
SWIPE_ACTION = { LEFT: 'swipeleft', RIGHT: 'swiperight' };
// our list of avatars
avatars = [
{
name: 'kristy',
image: 'http://semantic-ui.com/images/avatar2/large/kristy.png',
visible: true
},
{
name: 'matthew',
image: 'http://semantic-ui.com/images/avatar2/large/matthew.png',
visible: false
},
{
name: 'chris',
image: 'http://semantic-ui.com/images/avatar/large/chris.jpg',
visible: false
},
{
name: 'jenny',
image: 'http://semantic-ui.com/images/avatar/large/jenny.jpg',
visible: false
}
];
// action triggered when user swipes
swipe(currentIndex: number, action = this.SWIPE_ACTION.RIGHT) {
// out of range
if (currentIndex >= this.avatars.length || currentIndex < 0) return;
let nextIndex = 0;
// swipe right, next avatar
if (action === this.SWIPE_ACTION.RIGHT) {
const isLast = currentIndex === this.avatars.length - 1;
nextIndex = isLast ? 0 : currentIndex + 1;
}
// swipe left, previous avatar
if (action === this.SWIPE_ACTION.LEFT) {
const isFirst = currentIndex === 0;
nextIndex = isFirst ? this.avatars.length - 1 : currentIndex - 1;
}
// toggle avatar visibility
this.avatars.forEach((x, i) => x.visible = (i === nextIndex));
}
}
A few notes on the swipe function:
- swipe takes two parameters: the current avatar index and the swipe action, whose value is swipeleft or swiperight.
- HammerJS provides several swipe events: swipe, swipeleft, swiperight, swipeup, swipedown. In our case, we handle just swipeleft and swiperight.
- We will bind the swipe action to our HTML view later.
Here's our HTML view.
<!-- app/app.component.html -->
<div>
<h4>Swipe Avatars with HammerJS</h4>
<!-- loop each avatar in our avatar list -->
<div class="swipe-box"
*ngFor="let avatar of avatars; let idx=index"
(swipeleft)="swipe(idx, $event.type)" (swiperight)="swipe(idx, $event.type)"
[class.visible]="avatar.visible" [class.hidden]="!avatar.visible">
<div>
<img [src]="avatar.image" [alt]="avatar.name">
</div>
<div>
<a class="header">{{avatar.name}}</a>
</div>
</div>
</div>
A few things to note in the template:
- While looping each avatar with the *ngFor directive, we declare a local variable idx to hold the current index of the avatar.
- On the swipeleft and swiperight events, we call the swipe function that we declared earlier.
- $event is the event object. We don't need the whole $event object, only $event.type, which returns the string swipeleft or swiperight.
- [class.visible] and [class.hidden]: we add or remove these two CSS classes based on the avatar.visible property.
We can use semantic-ui CSS to ease our styling, but that's not necessary for our purposes. Apart from that, there are a couple of custom CSS classes that we need to add for our component.
.swipe-box {
display: block;
width: 100%;
float: left;
margin: 0;
}
.visible {
display: block;
}
.hidden {
display: none;
}
- .swipe-box floats all the avatars.
- .visible and .hidden are used to show/hide the avatar card.
We're now done with our component. Let's move on to setting up HammerJS. First, we need to include the HammerJS JavaScript file in our main view index.html file.
<!-- index.html -->
<head>
...
<!-- Hammer JS -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/hammer.js/2.0.8/hammer.js"></script>
....
</head>
By default, the desktop browser doesn’t support the touch event. HammerJS has an extension called touch-emulator.js
that provides a debug tool to emulate touch support in the browser. You can include these lines in the index.html
like this before the HammerJS file:
<!-- index.html -->
<head>
...
<!-- Hammer JS Touch Emulator: Uncomment for desktop -->
<script src="touch-emulator.js"></script>
<script>
TouchEmulator();
</script>
...
</head>
For details on how the emulator works, please refer to the official documentation.
Because I run this example in Plunker, I just include the HammerJS CDN URL. If you want to manage your package locally, you may run the following command:
- npm install hammerjs --save
Then, include the JS files in your build.
If we do not include the HammerJS file, an error will be thrown: “Hammer.js is not loaded, can not bind swipeleft event”.
By default, if you do not have any custom configuration, you can use HammerJS straight away; Angular 2 supports HammerJS out of the box, so there is no need to include anything during application bootstrap. Your application module will look something like this:
// app.module.ts
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { AppComponent } from './app.component';
@NgModule({
imports: [ BrowserModule ],
declarations: [ AppComponent ],
bootstrap: [ AppComponent ],
providers: [ ]
})
export class AppModule { }
What if you would like to apply some custom settings, like adjusting the velocity and threshold?
Quick explanation:
- velocity: the minimal speed (in px/ms) required before a swipe is recognized.
- threshold: the minimal distance (in px) required before a swipe is recognized.
There are other settings you can apply as well. For details, refer to the HammerJS documentation.
Angular 2 provides a token called HAMMER_GESTURE_CONFIG
which accepts a HammerGestureConfig
type.
In the simplest way, we can extend HammerGestureConfig
like this:
// app.module.ts
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { HammerGestureConfig, HAMMER_GESTURE_CONFIG } from '@angular/platform-browser';
import { AppComponent } from './app.component';
export class MyHammerConfig extends HammerGestureConfig {
overrides = <any>{
'swipe': {velocity: 0.4, threshold: 20} // override default settings
}
}
@NgModule({
imports: [ BrowserModule ],
declarations: [ AppComponent ],
bootstrap: [ AppComponent ],
providers: [ {
provide: HAMMER_GESTURE_CONFIG,
useClass: MyHammerConfig
} ] // use our custom hammerjs config
})
export class AppModule { }
In our case, we just want to override some default settings of the swipe action. You may implement the HammerGestureConfig class yourself if you want more control.
Take a look at HammerGestureConfig's straightforward source code or the documentation. The whole class has only two properties (events, overrides) and one function (buildHammer).
Angular 2 makes it really easy to integrate with HammerJS for touch gesture event detection.
That’s it. Happy coding!
Views hold the presentation logic of a Laravel application. They are served separately from the application logic using Laravel's Blade templating engine.
Passing data from a controller to a view is as simple as declaring a variable and adding it as a parameter to the returned view helper method. There is no shortage of ways to do this.
We will create a SampleController
class that will handle our logic
- php artisan make:controller SampleController
Here is a sample controller in app/Http/Controllers/SampleController.php
class SampleController extends Controller
{
/**
* pass an array to the 'foo' view
* as a second parameter.
*/
public function foo()
{
return view('foo', [
'key' => 'The quick brown fox jumped over the lazy dog'
]);
}
/**
* Pass a key variable to the 'foo' view
* using the compact method as
* the second parameter.
*/
public function bar()
{
$key = 'If a woodchuck could chuck wood,';
return view('foo', compact('key'));
}
/**
* Pass a key, value pair to the view
* using the with method.
*/
public function baz()
{
return view('foo')->with(
'key',
'How much wood would a woodchuck chuck.'
);
}
}
This is all fine and dandy. Well, it is until you try passing data to many views.
More often than not, we need to get some data on different pages. One such scenario would be information on the navigation bar or footer that will be available across all pages on your website, say, the most recent movie in theatres.
For this example, we will use an array of 5 movies to display the latest movie (the last item on the array) on the navigation bar.
For this, I will use a Bootstrap template to set up the navigation bar in resources/views/app.blade.php
.
<nav class="navbar navbar-inverse">
<div class="container-fluid">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="/">Movie Maniac</a>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav">
<li><a href="foo">Foo</a></li>
<li><a href="bar">Bar</a></li>
<li><a href="baz">Baz</a></li>
</ul>
<ul class="nav navbar-nav navbar-right">
<li><a href="#">latest movie title here</a></li>
</ul>
</div>
<!-- /.navbar-collapse -->
</div>
<!-- /.container-fluid -->
</nav>
The latest-movie text on the far right will, however, be replaced with a title from our movie list, to be created later on.
Let’s go ahead and create our movie list on the homepage.
View all the routes in app/Http/routes.php
Route::get('/', 'SampleController@index');
Route::get('/foo', 'SampleController@foo');
Route::get('/bar', 'SampleController@bar');
Route::get('/baz', 'SampleController@baz');
We are just creating four simple routes.
View the controller in app/Http/Controllers/SampleController.php
/**
* Return a list of the latest movies to the
* homepage
*
* @return View
*/
public function index()
{
$movieList = [
'Shawshank Redemption',
'Forrest Gump',
'The Matrix',
'Pirates of the Carribean',
'Back to the Future',
];
return view('welcome', compact('movieList'));
}
See latest movie views in resources/views/welcome.blade.php
@extends('app')
@section('content')
<h1>Latest Movies</h1>
<ul>
@foreach($movieList as $movie)
<li class="list-group-item"><h5>{{ $movie }}</h5></li>
@endforeach
</ul>
@endsection
It goes without saying that my idea of the latest movies is skewed, but we can overlook that for now. Here is what our homepage looks like now.
Awesome! We have our movie list. And now to the business of the day.
We will assume that Back to the Future, being the last movie on our list, is the latest movie, and display it as such on the navigation bar.
/**
* Return a list of the latest movies to the
* homepage
*
* @return View
*/
public function index()
{
$movieList = [
'Shawshank Redemption',
'Forrest Gump',
'The Matrix',
'Pirates of the Carribean',
'Back to the Future',
];
$latestMovie = end($movieList);
return view('welcome', compact('movieList', 'latestMovie'));
}
We now have Back to the Future as our latest movie, and rightfully so, because Back to the Future 4 was released a week from now in 1985. I cannot make this stuff up.
This seems to work. Well until you try navigating to other pages (Try one of foo, bar, baz) created earlier on. This will throw an error.
This was expected, and by now you must have figured out why it happened. We declared the latest movie variable in the index
method of the controller and passed it to the welcome
view.
By extension, we made latestMovie
available to the navigation bar BUT only to views/welcome.blade.php
.
When we navigate to /foo
, our navigation bar still expects a latestMovie
variable to be passed to it from the foo
method but we have none to give.
There are three ways to fix this:
Declare the latestMovie
value in every other method, and in this case, the movieList
too. It goes without saying we will not be doing this.
Place the movie information in a service provider’s boot method. You can place it on App/Providers/AppServiceProvider
or create one. This quickly becomes inefficient if we are sharing a lot of data.
/**
* Bootstrap any application services.
*
* @return void
*/
public function boot()
{
view()->share('key', 'value');
}
View composers are callbacks or class methods that are called when a view is rendered. If you have data that you want to be bound to a view each time that view is rendered, a view composer can help you organize that logic into a single location.
-Laravel documentation
While it is possible to get the data in every controller method and pass it to the single view, this approach may be undesirable.
View composers, as described in the Laravel documentation, bind data to a view every time it is rendered. They clean up our code by fetching data once and passing it to the view.
Since Laravel does not include a ViewComposers
directory in its application structure, we will have to create our own for better organization. Go ahead and create App\Http\ViewComposers
We will then proceed to create a new service provider to handle all our view composers using the artisan command
- php artisan make:provider ComposerServiceProvider
The service provider will be visible in app/Providers
Add the ComposerServiceProvider class to the providers array in config/app.php so that Laravel is able to identify it.
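Assuming a default Laravel 5 setup, that registration is a one-line addition to the providers array; a sketch (older 5.x versions use a quoted class-name string instead of the ::class constant):

```php
// config/app.php
'providers' => [
    // Other service providers...
    App\Providers\ComposerServiceProvider::class,
],
```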
Modify the boot method in the new provider by calling the composer method on the view() helper:
/**
* Bootstrap the application services.
*
* @return void
*/
public function boot()
{
view()->composer(
'app',
'App\Http\ViewComposers\MovieComposer'
);
}
Laravel will execute a MovieComposer@compose
method every time app.blade.php
is rendered. This means that every time our navigation bar is loaded, we will be ready to serve it with the latest movie from our view composer.
While MovieComposer@compose
is the default method to be called, you can overwrite it by specifying your own custom method name in the boot method.
view()->composer('app', 'App\Http\ViewComposers\MovieComposer@foobar');
Next, we will create our MovieComposer
class
<?php
namespace App\Http\ViewComposers;
use Illuminate\View\View;
class MovieComposer
{
public $movieList = [];
/**
* Create a movie composer.
*
* @return void
*/
public function __construct()
{
$this->movieList = [
'Shawshank Redemption',
'Forrest Gump',
'The Matrix',
'Pirates of the Carribean',
'Back to the Future',
];
}
/**
* Bind data to the view.
*
* @param View $view
* @return void
*/
public function compose(View $view)
{
$view->with('latestMovie', end($this->movieList));
}
}
The with method binds latestMovie to the app view when it is rendered. Notice that we imported Illuminate\View\View.
We can now get rid of the latestMovie
variable and actually remove it from the compact
helper method in SampleController@index
.
public function index()
{
$movieList = [
'Shawshank Redemption',
'Forrest Gump',
'The Matrix',
'Pirates of the Carribean',
'Back to the Future',
];
return view('welcome', compact('movieList'));
}
We can now access the latest movie on the navigation bar in all our routes.
While we seem to have solved one issue, we have created another: we now have two movieList arrays, one in the controller and one in the constructor of MovieComposer.
While this may have been caused by using an array as a simple data source, it is a good idea to fix it, say, by creating a movie repository. Ideally, the latestMovie value would be fetched from a database using Eloquent.
Check out the GitHub repo for this tutorial to see how I created a Movie Repository to get around the redundancy as shown below in MovieComposer
and SampleController
.
public $movieList = [];
/**
* Create a movie composer.
*
* @param MovieRepository $movies
*
* @return void
*/
public function __construct(MovieRepository $movies)
{
$this->movieList = $movies->getMovieList();
}
/**
* Bind data to the view.
*
* @param View $view
* @return void
*/
public function compose(View $view)
{
$view->with('latestMovie', end($this->movieList));
}
public function index(MovieRepository $movies)
{
$movieList = $movies->getMovieList();
return view('welcome', compact('movieList'));
}
It is possible to create a view composer that is executed when all views are rendered by replacing the view name with an asterisk wildcard:
view()->composer('*', function (View $view) {
//logic goes here
});
Notice that instead of passing a string with the path to MovieComposer, you can also pass a closure.
You can also limit the view composer to a finite number of views, say, nav
and footer
view()->composer(
['nav', 'footer'],
'App\Http\ViewComposers\MovieComposer'
);
Let's look at a very common use case: you have a list of items, and you need to display them all nicely on the screen in card form.
It looks okay, but you want all cards to always maintain the same height. They should always match the height of the tallest card, and resize accordingly when the screen size changes, like this:
We can achieve this by creating a custom directive.
Interesting? Let’s code.
GitHub: https://github.com/chybie/ng-musing/tree/master/src/app/same-height
Let’s look at our main component.
import { Component } from '@angular/core';
@Component({
selector: 'page-same-height',
template: `
<main class="container">
<h2>Malaysia States</h2>
<div class="row">
<div class="col-sm-4" *ngFor="let state of list">
<div class="card card-block">
<h4 class="card-title">{{ state.title }}</h4>
<p class="card-text">
{{ state.content }}
</p>
</div>
</div>
</div>
</main>
`
})
export class PageSameHeightComponent {
list = [
{
title: 'Selangor',
content: 'Selangor is a state ....'
},
{
title: 'Kuala Lumpur',
content: 'Kuala Lumpur is the capital of Malaysia...'
},
{
title: 'Perak',
content: 'Perak is a state in the northwest of Peninsular Malaysia...'
}
]
}
The code is pretty much self-explanatory. We have a list
of states. We loop over the list with *ngFor
in our template and display each item accordingly.
Please note that in this example, I am using Bootstrap 4 to style the CSS, but it's not necessary.
There are a few ways to match the height. In this tutorial, we will match height by using the CSS class name.
In our example, we want to match the height of all elements with class name card on the same row. Row
is the parent and all Card
are the children.
To align all the children Card
of the Row
, let’s modify our HTML template and assign an attribute called myMatchHeight
to the row and pass in card
as the attribute value.
...
@Component({
selector: 'page-same-height',
template: `
<main class="container">
<h2>Malaysia States</h2>
<!-- Assign myMatchHeight here -->
<div class="row" myMatchHeight="card">
<div class="col-sm-4" *ngFor="let state of list">
<div class="card card-block">
...
Now you might be wondering, where is the myMatchHeight
coming from? That is the custom directive that we are going to build next!
Let’s create our match height directive.
import {
Directive, ElementRef, AfterViewChecked,
Input, HostListener
} from '@angular/core';
@Directive({
selector: '[myMatchHeight]'
})
export class MatchHeightDirective implements AfterViewChecked {
// class name to match height
@Input()
myMatchHeight: string;
constructor(private el: ElementRef) {
}
ngAfterViewChecked() {
// call our matchHeight function here later
}
matchHeight(parent: HTMLElement, className: string) {
// match height logic here
}
}
A few things to note:
- We define the directive with the @Directive decorator and specify [myMatchHeight] as the selector, which means we can use it as an attribute on any HTML tag, as we do in our main component.
- We declare an input property myMatchHeight with the same name as our selector. This lets us write myMatchHeight="some_value", and some_value will be assigned to the myMatchHeight variable; in our main component, we pass in card as the value.
- The directive implements the AfterViewChecked lifecycle hook, which is where we will call our matchHeight function.
Let's break down what we need to do, step by step:
1. Find all the child elements with the selected class name.
2. Get all the child elements' heights and find the tallest.
3. Update all the child elements to the tallest height.
Let’s code it.
...
matchHeight(parent: HTMLElement, className: string) {
// match height logic here
if (!parent) return;
// step 1: find all the child elements with the selected class name
const children = parent.getElementsByClassName(className);
if (!children) return;
// step 2a: get all the child elements heights
const itemHeights = Array.from(children)
.map(x => x.getBoundingClientRect().height);
// step 2b: find out the tallest
const maxHeight = itemHeights.reduce((prev, curr) => {
return curr > prev ? curr : prev;
}, 0);
// step 3: update all the child elements to the tallest height
Array.from(children)
.forEach((x: HTMLElement) => x.style.height = `${maxHeight}px`);
}
...
Again, the code is pretty much self-explanatory.
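The reduce in step 2b is simply a fold that keeps the larger of two numbers. In isolation (outside Angular, with hypothetical values), the same pattern behaves like this, and `Math.max(...itemHeights)` would be an equivalent one-liner:

```typescript
// fold an array of heights down to the tallest one
const itemHeights: number[] = [120, 240.5, 180];
const maxHeight = itemHeights.reduce((prev, curr) => (curr > prev ? curr : prev), 0);
console.log(maxHeight); // 240.5
```

Starting the fold at 0 is safe here because element heights are never negative.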
Now that we have completed our match height logic, let’s use it.
...
ngAfterViewChecked() {
// call our matchHeight function here
this.matchHeight(this.el.nativeElement, this.myMatchHeight);
}
...
Remember to import the directive in your app module
and add it in declarations
.
Refresh your browser and you should see that all the cards have the same height!
Now, let’s resize your browser. The card height is not adjusted automatically until you refresh the browser again. Let’s update our code.
We need to listen to the window resize event and update the elements’ height.
...
@HostListener('window:resize')
onResize() {
// call our matchHeight function here
this.matchHeight(this.el.nativeElement, this.myMatchHeight);
}
...
Another problem you'll see is that when you scale down your browser, the height is updated accordingly (the cards grow taller). However, when you scale your browser back up, the height does not shrink back down.
Why is this happening? It is because once the card size grows taller, all content can fit in and no height adjustment is needed.
To solve this, we need to reset the height
of all elements before we recalculate the tallest height. We do this after step 1.
...
matchHeight(parent: HTMLElement, className: string) {
// step 1: find all the child elements with the selected class name
const children = parent.getElementsByClassName(className);
if (!children) return;
// step 1b: reset all children height
Array.from(children).forEach((x: HTMLElement) => {
x.style.height = 'initial';
});
...
}
...
That's it. Remember that whenever we need to manipulate a DOM element, it's recommended to do it in a directive. Creating a custom directive and listening to events is easy with Angular.
Our directive works well with any element, including nested components. Check out the source code for two more examples.
GitHub: https://github.com/chybie/ng-musing/tree/master/src/app/same-height
Happy coding!
Redis, created by Salvatore Sanfilippo, is an open-source, in-memory data structure server with an advanced key-value cache and store, often referred to as a NoSQL database. It is also called a data structure server since it can store strings, hashes, lists, sets, sorted sets, and more.
The essence of a key-value store is the ability to store some data, called a value inside a key. This data can later be retrieved only if we know the exact key used to store it.
Salvatore Sanfilippo (the creator of Redis) has said Redis can be used to replace an RDBMS. Now, although nothing is impossible, I think it would be a bad idea, because using a key-value store for things like full-text search might be painful, especially once you consider ACID compliance and syncing data in a key-value store.
Below are just a few uses of Redis, though there are many more than this.
This article's aim is not to teach you the syntax of Redis (you can learn about Redis's syntax here); instead, we will learn how to use Redis from PHP.
Redis is pretty easy to install and the instructions, included, are for both Windows and Linux users.
Installing Redis on Linux is pretty simple, but you'll need TCL. If you don't have TCL installed, you can simply run:
- sudo apt-get install tcl
To install Redis:
- wget http://download.redis.io/releases/redis-2.8.19.tar.gz
- tar xzf redis-2.8.19.tar.gz
- cd redis-2.8.19
- make
Note: 2.8.19
should be replaced with the latest stable version of Redis.
All Redis binaries are saved in the src
folder. To start the server:
- src/redis-server
Redis installation on Windows is very easy, just visit this link, download a package, and install.
Predis is a Redis client for PHP. It is well written and has a lot of support from the community. To use Predis, just clone the repository into your working directory:
- git clone git://github.com/nrk/predis.git
First, we’ll require the Redis Autoloader and register it. Then we’ll wrap the client in a try-catch block. The connection setting for connecting to Redis on a local server is different from connecting to a remote server.
<?php
require "predis/autoload.php";
Predis\Autoloader::register();
try {
$redis = new Predis\Client();
// This connection is for a remote server
/*
$redis = new Predis\Client(array(
"scheme" => "tcp",
"host" => "153.202.124.2",
"port" => 6379
));
*/
}
catch (Exception $e) {
die($e->getMessage());
}
Now that we have successfully connected to the Redis server, let’s start using Redis.
Redis supports a range of datatypes, and you might wonder what a NoSQL key-value store has to do with datatypes. Well, these datatypes help developers store data in a meaningful way and can make data retrieval faster. Here are some of the datatypes supported by Redis:
Others are bitmaps and hyperloglogs, but they will not be discussed in this article, as they are pretty dense.
In Redis, the most important commands are SET
, GET
and EXISTS
. These commands are used to store, check, and retrieve data from a Redis server. Just like the commands, the Predis class can be used to perform Redis operations by methods with the same name as commands. For example:
<?php
// sets message to contain "Hello world"
$redis->set('message', 'Hello world');
// gets the value of message
$value = $redis->get('message');
// Hello world
print($value);
echo ($redis->exists('message')) ? "Oui" : "please populate the message key";
INCR
and DECR
are commands used to either decrease or increase a value.
<?php
$redis->set("counter", 0);
$redis->incr("counter"); // 1
$redis->incr("counter"); // 2
$redis->decr("counter"); // 1
We can also increase the values of the counter key by larger integer values or we can decrease the value of the counter key with the INCRBY
and DECRBY
commands.
<?php
$redis->set("counter", 0);
$redis->incrby("counter", 15); // 15
$redis->incrby("counter", 5); // 20
$redis->decrby("counter", 10); // 10
There are a few basic Redis commands for working with lists and they are:
Simple List Usage:
<?php
$redis->rpush("languages", "french"); // [french]
$redis->rpush("languages", "arabic"); // [french, arabic]
$redis->lpush("languages", "english"); // [english, french, arabic]
$redis->lpush("languages", "swedish"); // [swedish, english, french, arabic]
$redis->lpop("languages"); // [english, french, arabic]
$redis->rpop("languages"); // [english, french]
$redis->llen("languages"); // 2
$redis->lrange("languages", 0, -1); // returns all elements
$redis->lrange("languages", 0, 1); // [english, french]
A hash in Redis is a map between one string field and string values, like a one-to-many relationship. The commands associated with hashes in Redis are:
<?php
$key = 'linus torvalds';
$redis->hset($key, 'age', 44);
$redis->hset($key, 'country', 'finland');
$redis->hset($key, 'occupation', 'software engineer');
$redis->hset($key, 'reknown', 'linux kernel');
$redis->hset($key, 'to delete', 'i will be deleted');
$redis->hget($key, 'age'); // 44
$redis->hget($key, 'country'); // finland
$redis->hdel($key, 'to delete');
$redis->hincrby($key, 'age', 20); // 64
$redis->hmset($key, [
'age' => 44,
'country' => 'finland',
'occupation' => 'software engineer',
'reknown' => 'linux kernel',
]);
// finally
$data = $redis->hgetall($key);
print_r($data); // returns all key-value that belongs to the hash
/*
[
'age' => 44,
'country' => 'finland',
'occupation' => 'software engineer',
'reknown' => 'linux kernel',
]
*/
The list of commands associated with sets includes:
<?php
$key = "countries";
$redis->sadd($key, 'china');
$redis->sadd($key, ['england', 'france', 'germany']);
$redis->sadd($key, 'china'); // this entry is ignored
$redis->srem($key, ['england', 'china']);
$redis->sismember($key, 'england'); // false
$redis->smembers($key); // ['france', 'germany']
Since Redis is an in-memory data store, you would probably not want to store data forever. That brings us to EXPIRE, EXPIREAT, TTL, and PERSIST:
<?php
$key = "expire in 1 hour";
$redis->expire($key, 3600); // expires in 1 hour
$redis->expireat($key, time() + 3600); // expires in 1 hour
sleep(600); // don't try this, just an illustration for time spent
$redis->ttl($key); // 3000, ergo expires in 50 minutes
$redis->persist($key); // this will never expire.
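The expiry arithmetic above is easy to verify with a toy model. This JavaScript sketch (with an explicit `now` parameter instead of real sleeping) mimics EXPIRE, TTL, and PERSIST; it is an illustration of the semantics, not a client library:

```javascript
// key -> expiry timestamp in seconds, or Infinity for "never expires"
const deadlines = new Map();

const expire = (key, seconds, now) => deadlines.set(key, now + seconds);
const ttl = (key, now) => {
  const d = deadlines.get(key);
  return d === Infinity ? -1 : d - now; // Redis reports -1 when a key has no expiry
};
const persist = (key) => deadlines.set(key, Infinity);

expire('session', 3600, 0); // expires in 1 hour
// 600 seconds later, 3000 seconds (50 minutes) remain:
const remaining = ttl('session', 600);
persist('session');         // now it will never expire
```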
The commands listed in this article are just a handful of many existing Redis commands (see more redis commands).
Redis is a strong replacement for memcached: it is faster, it scales better (it supports master-slave replication), and it supports rich datatypes, which is why companies such as Facebook, Twitter, and Instagram have dropped memcached for Redis. Redis is open source, and many brilliant programmers from the open-source community have contributed patches.
Browserify changed my life.
… My life as a JavaScript developer, anyway.
With Browserify you can write code [in the browser] that uses require in the same way that you would use it in Node.
Browserify lets you use require in the browser, the same way you'd use it in Node. It's not just syntactic sugar for loading scripts on the client: It's a tool that brings all the resources of the npm ecosystem off of the server, and into the client.
Simple, yet immensely powerful.
In this article, we’ll take a look at:
Let’s dive in.
Before we get started, make sure you’ve got Node and npm installed. I’m running Node 5.7.0 and NPM v3.6.0, but versioning shouldn’t be a problem. Feel free to either grab the repo or code along.
Anyone who's worked with Node will be familiar with its CommonJS style require function. require-ing a module exposes its public API to the file you required it in:
"use strict";
const React = require('react');
let Component = React.createClass ({
/* Using React, save the world */
});
Node's require implementation makes modularizing server-side code quite a straightforward task. Install, require, hack: Dead simple.
Module loading in the client is an inherently different beast. In the simplest case, you load your modules in a series of <script> tags in your HTML. This is perfectly correct, but it can be problematic for two reasons: every module costs an additional HTTP request, and, as the Require.js documentation puts it, "with <script> tags, you cannot control [script] loading and executing behavior reliably cross-browser."
The AMD specification and AMD loaders – Require.js being amongst the most popular – came about as solutions to these issues. And, frankly, they're awesome. There's nothing inherently wrong with Require.js, or AMD loaders in general, but the solutions furnished by newer tools like Browserify and Webpack bring distinct advantages over those offered by Require.js.
Amongst other things, Browserify:
We’ll take a look at all of this and a whole lot more throughout the article. But first, what’s the deal with Webpack?
The religious wars between users of Angular and Ember, Grunt and Gulp, Browserify and Webpack, all prove the point: Choosing your development tools is serious business.
The choice between Browserify or Webpack depends largely on the tooling workflow you already have and the exigencies of your project. There are a number of differences between their feature sets, but the most important distinction, to my mind, is one of intent:
Browserify aims to bring Node to the browser: it supports Node's require syntax, and provides browser-specific shims for much of Node's core functionality. Webpack is a more general-purpose bundler for static assets.
If your project and dependencies are already closely tied to the Node ecosystem, Browserify is a solid choice. If you need more power to manage static assets than you can shake a script at, Webpack's your tool.
I tend to stick with Browserify, as I rarely find myself in need of Webpack’s additional power. You might find Webpack to be a solid choice if your build pipeline gets complex enough, though.
If you decide to check it out, take a look at Front-End Tooling Book’s chapter on Webpack, and Pete Hunt’s Webpack How-To before diving into the official docs.
Note: If you don’t feel like typing or copy/pasting, clone my repo.
Time to get our hands dirty. The first step is to install Browserify. Fire up a terminal and run:
- npm install --global browserify
This installs the Browserify package and makes it available system-wide.
Oh, and if you find yourself needing to use sudo for this, fix your npm permissions.
Next, let’s give our little project a home. Find a suitable place on your hard drive and make a new folder for it:
- mkdir Browserify_Introduction
- cd Browserify_Introduction
We'll need a minimal home page, as well. Drop this into index.html:
<!doctype html>
<html>
<head>
<title>Getting Cozy with Browserify</title>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css">
<style>
h1, p, div { text-align: center; }
html { background: #fffffe; }
</style>
</head>
<body>
<div class="container">
<h2>Welcome to the Client Side.</h2>
<div class="well">
<p>I see you've got some numbers. Why not let me see them?</p>
<div id="response">
</div>
</div>
</div>
<script src="main.js"></script>
<script src="bundle.js"></script>
</body>
</html>
On the off chance you're typing this out by hand, you'll definitely have noticed the reference to the nonexistent main.js. Nonexistent files are no fun, so let's make it exist.
First, install Ramda:
- npm install ramda --save
There’s nothing special about Ramda, by the way. I just chose it because I like it. Any package would do.
Now, drop this into main.js:
"use strict";
var R = require('ramda');
var square = function square (x) { return x * x; }
var squares = R.chain(square, [1, 2, 3, 4, 5]);
document.getElementById('response').innerHTML = squares;
This is simple, but let’s go step-by-step anyway.
The script requires Ramda; defines a square function; uses R.chain to map square over [1, 2, 3, 4, 5], saving the result in squares; and finally grabs the div on our page with the id response, setting its innerHTML to squares.
The important thing to note is that we're using Node's require, available only in a Node environment, together with the DOM API, available only in the browser.
That shouldn't work. And, in fact, it doesn't. If you open index.html in your browser and open up the console, you'll find a ReferenceError just waiting to grab your attention.
Ew. Let’s get rid of that.
In the same directory housing your main.js, run:
- browserify main.js -o bundle.js
Now open up index.html again, and you should see our array of squares smack dab in the middle of the page.
It’s that simple.
When you tell Browserify to bundle up main.js, it scans the file, and takes note of all the files you require. It then includes the source of those files in the bundle and repeats the process for their dependencies.
In other words, Browserify traverses the dependency graph, using your main.js as its entry point, and includes the source of every dependency it finds.
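Conceptually, the first step of that traversal is just scanning source text for require() calls. This is a deliberately naive sketch, not Browserify's actual resolver (which parses the syntax tree, resolves node_modules paths, caches results, and more):

```javascript
// Find the module names requested by require('...') calls in a source string.
const findRequires = (src) =>
  [...src.matchAll(/require\(\s*['"]([^'"]+)['"]\s*\)/g)].map(m => m[1]);

const source = `
  var R = require('ramda');
  var helper = require('./list_provider.coffee');
`;
const deps = findRequires(source); // ['ramda', './list_provider.coffee']
```

A real bundler would now recurse: resolve each name to a file, scan that file, and repeat until the whole graph is covered.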
If you open up your bundle.js, you'll see this in action. At the top is some obfuscated weirdness; then, a portion with your source code; and finally, the entirety of the Ramda library.
Magic, eh?
Let’s take a look at some additional Browserify fundamentals.
Browserify isn’t limited to concatenating the source of your dependencies: It’s also capable of transforming the code along the way.
“Transform” can mean many things. It can be compiling CoffeeScript to JavaScript, transpiling ES2015 to vanilla JavaScript, or even replacing const with var declarations.
If it’s a change to your code, it counts as a transformation. We’ll take a look at using transforms in the full example, so hang on tight for usage details. For now, be sure to bookmark the growing list of available Browserify transforms for future reference.
One of the disadvantages to transformations – and builds in general – is mangled line references. When your code throws an error, you want the browser to tell you, “take a look at line 57, column 23”. Not, “take a look at variable q on line 1, column 18,278 of main.min.js.”
The solution is source maps. They’re files that tell your browser how to translate between line references in your transformed code and line references in your original source.
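For a feel of what such a file contains, here is a minimal, hand-written example of a source map's JSON shape (the mappings value below is illustrative, not a real encoding of any particular file):

```javascript
// A source map is plain JSON describing how generated positions map back to originals.
const sourceMap = {
  version: 3,            // current source map spec revision
  file: 'bundle.js',     // the generated file this map describes
  sources: ['main.js'],  // the original source files
  names: ['square'],     // original identifiers referenced by the mappings
  mappings: 'AAAA'       // Base64-VLQ encoded position data
};
const json = JSON.stringify(sourceMap);
```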
With Browserify, enabling source maps is trivial. Run:
- browserify --debug main.js -o bundle.js
The --debug flag tells Browserify to include source map information in bundle.js. That's all you have to add to make it work.
There is one downside to this, though: Adding source maps to bundle.js makes your bundle twice as large.
That’s fine for development. But making your users download a file twice as big as the one they really need is a bit rude, don’t you think?
The solution is to create two files: One for the source map, one for the bundle. If you’re using Browserify alone, the tool of choice for this is exorcist.
Once you've installed it (npm install --global exorcist), you use it like this:
- browserify main.js --debug | exorcist bundle.map.js > bundle.js
This rips all the source map information out of bundle.js and spits it into bundle.map.js instead.
That’s mostly all there is to using Exorcist. Be sure to check the exorcist documentation for the details.
There is a whole swath of tools for Browserify that keep an eye on your files and rebuild your bundle whenever they change. We’ll take a look at two tools: Watchify, and Beefy.
Watchify is a standard tool for automatically rebuilding your bundle.js whenever you update source files.
First, install it with npm:
- npm install --global watchify
Next, delete your bundle.js.
Now, navigate to your working directory in a new terminal, and run:
- watchify main.js -o bundle.js -v
The -v flag tells Watchify to notify you whenever it rebuilds your bundle. It'll still work if you don't include it, but you won't be able to tell it's doing anything.
That aside, notice that using Watchify is identical to using Browserify! You should have gotten some output, and if you check, you'll notice a newly updated bundle.js sitting in your working directory.
Now, open up main.js and save it without changing anything. You'll see Watchify rebuild your bundle and spit out some more logs – that's all it takes to automatically rebuild your bundle when you change your source!
The Watchify repo has all the information on more advanced usage, such as how to use it with Exorcist. Check them out if you need.
If you ran the example, be sure to kill the Watchify process before moving on (just close the terminal you ran it in, or kill $(pgrep node) if you love you some CLI).
Beefy makes it easy to enable live reload alongside automatic rebuild. It does two big things for you:
Whenever you change anything, it rebuilds your bundle, and – if you tell it to – automatically refreshes your browser with the changes.
If you’re like me and need such a minimal feedback loop, it’s hard to go wrong with Beefy.
To get started, go ahead and install it:
- npm install -g beefy
I’ve installed it globally because I use it so much. If you’d rather use it on a per-project basis, run:
- npm install --save-dev beefy
Either way, using it is straightforward. First, delete your bundle.js. Then, spin up a new terminal, navigate to your working directory, and run:
- beefy main.js --live
Beefy should print some information notifying you that it's listening on http://127.0.0.1:9966.
If instead it says Error: Could not find a suitable bundler!, run this instead:
- beefy main.js --browserify $(which browserify) --live
The --browserify $(which browserify) bit tells Beefy to use the global Browserify installation. You don't need this unless you got the error.
We told Beefy to watch main.js. If your entry point has a different name – say, app.js – you'd pass it that instead. The --live switch tells Beefy to automatically rebuild your bundle and reload the browser whenever you change your source code.
Let's see it in action. In your browser, navigate to http://localhost:9966. You should see the same home page we did last time.
Now, open up main.js, and change squares:
"use strict";
var R = require('ramda');
var square = function square (x) { return x * x; }
var squares = R.chain(square, [1, 2, 3, 4, 5, 6]);
document.getElementById('response').innerHTML = squares
Save it, and check out the web page. You should see an updated version of it:
And if you were watching it as you saved, you’d have noticed it update in real-time.
Under the hood, Beefy rebuilds your main.js whenever the server receives a request for bundle.js. Beefy does not save a bundle.js to your working directory; when you need one for production, you'll still have to build that using Browserify. We'll see how to deal with that inconvenience in just a second.
Again, that’s all there is to it. If you need anything more specific, the documentation’s got your back.
That’s it for Browserify: The Essentials. Let’s build a small Browserify configuration that:
As we develop, it rebuilds our bundle and reloads the browser automatically; when we build manually, it produces a minified bundle.js with separate source maps.
A real, production-quality workflow would do more. But this will show you how to use Browserify to do something nontrivial, and extending it for your own projects should be a cinch.
We’ll be using npm scripts to set this up. In the next section, we’ll do it with Gulp.
Let’s get to it.
We’ll need to install some packages to get this done:
You’ve already got Beefy, so don’t worry about installing it. To grab the others, run:
- npm install --save-dev caching-coffeeify coffeeify minifyify
Now, let's start building out our scripts. Open up your package.json. You should find a scripts key about halfway down; it should include a key called "test".
Right after it, add a "serve" task:
- "serve" : "beefy main.js --live"
You can see the whole package.json at my GitHub repo. If you had to use the --browserify $(which browserify) option earlier, you'll have to do that here too.
Save that, and back in your terminal, run npm run serve. You should see the same output we got when we ran Beefy earlier.
You may get an ENOSPC error. If you do, run npm dedupe and try again. If that doesn't help, the top answer on this SO thread will solve the problem.
We just associated a command – beefy main.js --live – with a script name – serve. When we run npm run <NAME>, npm executes the command associated with the name you pass, located in the "scripts" section of your package.json. In this case, npm run serve fires up Beefy.
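That lookup can be sketched in a few lines. resolveScript below is a hypothetical helper, not part of npm; it only models the name-to-command step:

```javascript
// Model of `npm run <name>`: look the name up in package.json's "scripts"
// object and hand the command string to a shell.
const pkg = {
  scripts: {
    serve: 'beefy main.js --live'
  }
};

function resolveScript (pkg, name) {
  const cmd = pkg.scripts && pkg.scripts[name];
  if (!cmd) throw new Error('missing script: ' + name);
  return cmd; // npm would now spawn this in a shell, with node_modules/.bin on PATH
}

const command = resolveScript(pkg, 'serve');
```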
Sweet start. Let’s finish it up.
Open up package.json again, and add to your serve script:
"serve" : "beefy main.js --browserify -t caching-coffeeify --live"
When using Beefy, the --browserify option lets you pass options to Browserify. The -t flag tells Browserify you're about to give it a transform to run. Caching-Coffeeify is a transform that compiles CoffeeScript to JavaScript, and optimizes to make sure it only recompiles what's changed – whenever you want to compile CoffeeScript on-the-fly like this, Caching-Coffeeify is a better choice than plain ol' Coffeeify.
Now, we can include CoffeeScript files in our project. To see this in action, create list_provider.coffee alongside your main.js:
# list_provider.coffee
"use strict"
module.exports = () => [1, 2, 3, 4, 5]
… And in main.js:
"use strict";
var R = require('ramda'),
get_list = require('./list_provider.coffee');
var square = function square (x) { return x * x; }
var squares = R.chain(square, get_list());
document.getElementById('response').innerHTML = squares
Now, run npm run serve, navigate to http://localhost:9966, and everything should still work.
To add a script that builds out a minified bundle with stripped source maps, open up your package.json and add:
/* Remainder omitted */
"serve" : "beefy main.js --browserify -t caching-coffeeify --live",
"build" : "browserify main.js --debug -t coffeeify -p [ minifyify --map bundle.js.map --output build/bundle.map.js ] > build/bundle.js"
/* Remainder omitted */
Now, in your working directory, run mkdir build. This is the folder we'll save our bundle.js and source map to. Run npm run build; check what's in your build folder; and voilà.
I assume you’re already familiar with Gulp. If not, check out the docs.
Using npm scripts is fine for simple setups. But it’s already clear that this can get cumbersome and unreadable.
That’s where Gulp comes in.
In the interest of brevity, we'll just set up a basic bundle task: it runs our Coffeeify and Babelify transforms, writes the bundle and an external source map to build/, and pings LiveReload.
But if you like bells and whistles, check out the repo. It features a fancy watch task for you to get started with.
As always, the first step is installation:
- npm install -g gulp && npm install gulp --save-dev
We’ll need to install a bit of a toolchain to make this work. Here’s the command; the names of the dependencies are in the Gulpfile below.
- npm install --save-dev babelify babel-preset-es2015 browserify coffeeify gulp gulp-livereload gulp-rename gulp-sourcemaps gulp-uglify gulp-util merge vinyl-buffer vinyl-source-stream watchify
Swell. Now, create a Gulpfile that looks like this:
// Heavily inspired by Mike Valstar's solution:
// http://mikevalstar.com/post/fast-gulp-browserify-babelify-watchify-react-build/
"use strict";
var babelify = require('babelify'),
browserify = require('browserify'),
buffer = require('vinyl-buffer'),
coffeeify = require('coffeeify'),
gulp = require('gulp'),
gutil = require('gulp-util'),
livereload = require('gulp-livereload'),
merge = require('merge'),
rename = require('gulp-rename'),
source = require('vinyl-source-stream'),
sourceMaps = require('gulp-sourcemaps'),
watchify = require('watchify');
var config = {
js: {
src: './main.js', // Entry point
outputDir: './build/', // Directory to save bundle to
mapDir: './maps/', // Subdirectory to save maps to
outputFile: 'bundle.js' // Name to use for bundle
},
};
// This method makes it easy to use common bundling options in different tasks
function bundle (bundler) {
// Add options to add to "base" bundler passed as parameter
bundler
.bundle() // Start bundle
.pipe(source(config.js.src)) // Entry point
.pipe(buffer()) // Convert to gulp pipeline
.pipe(rename(config.js.outputFile)) // Rename output from 'main.js'
// to 'bundle.js'
.pipe(sourceMaps.init({ loadMaps : true })) // Strip inline source maps
.pipe(sourceMaps.write(config.js.mapDir)) // Save source maps to their
// own directory
.pipe(gulp.dest(config.js.outputDir)) // Save 'bundle' to build/
.pipe(livereload()); // Reload browser if relevant
}
gulp.task('bundle', function () {
var bundler = browserify(config.js.src) // Pass browserify the entry point
.transform(coffeeify) // Chain transformations: First, coffeeify . . .
.transform(babelify, { presets : [ 'es2015' ] }); // Then, babelify, with ES2015 preset
bundle(bundler); // Chain other options -- sourcemaps, rename, etc.
})
Now if you run gulp bundle in your working directory, you'll have your bundle.js sitting in build/, and your bundle.js.map sitting in build/maps/.
This config is mostly Gulp-specific detail, so I'll let the comments speak for themselves. The important thing to note is that, in our bundle task, we can easily chain transformations. This is a great example of how intuitive and fluent Browserify's API can be. Check the documentation (https://github.com/substack/node-browserify) for everything else you can do with it.
Whew! What a whirlwind tour. So far, you’ve learned:
That’s more than enough to be productive with Browserify. There are a few links you should bookmark:
And that about wraps it up. If you’ve got questions, comments, or confusion, drop a line in the comments – I’ll get back to you.
Be sure to follow me on Twitter (@PelekeS) if you want a heads-up when I publish something new. Next time, we’ll make that boring home page a lot more interesting by using this tooling alongside React.
Until then, keep getting cozy with Browserify. Go build something incredible.
Note: This article is part of our Easy Node Authentication series.
Authentication and logins in Node can be a complicated thing. Actually logging in for any application can be a pain. This article series will deal with authenticating in your Node application using the package Passport.
Note: Updates
Edit 11/18/2017: Updated to reflect Facebook API changes. Updated dependencies in package.json.
Edit #1: Changed password hashing to be handled inside user model and asynchronously.
Edit #2: Changed password hashing to be explicitly called. Helps with future tutorials.
We will build an application that will have:
Enough chit-chat. Let’s dive right into a completely blank Node application and build our entire application from scratch.
Here’s what we’ll be building:
And after a user has logged in with all their credentials:
For this article, we’ll be focusing on setup and only local logins and registrations/signups. Since this is the first article and also deals with setting up our application, it will probably be one of the longer ones if not the longest. Sit tight for the duration of your flight.
To set up our base Node application, we’ll need a few things. We’ll set up our npm packages, node application, configuration files, models, and routes.
- app
------ models
---------- user.js <!-- our user model -->
------ routes.js <!-- all the routes for our application -->
- config
------ auth.js <!-- will hold all our client secret keys (facebook, twitter, google) -->
------ database.js <!-- will hold our database connection settings -->
------ passport.js <!-- configuring the strategies for passport -->
- views
------ index.ejs <!-- show our home page with login links -->
------ login.ejs <!-- show our login form -->
------ signup.ejs <!-- show our signup form -->
------ profile.ejs <!-- after a user logs in, they will see their profile -->
- package.json <!-- handle our npm packages -->
- server.js <!-- setup our application -->
Go ahead and create all those files and folders and we’ll fill them in as we go along.
package.json
We are going to install all the packages needed for the entire tutorial series. This means we'll install everything needed for passport (the local, facebook, twitter, and google strategies) and the other things we need.
{
"name": "node-authentication",
"main": "server.js",
"dependencies" : {
"express" : "~4.14.0",
"ejs" : "~2.5.2",
"mongoose" : "~4.13.1",
"passport" : "~0.3.2",
"passport-local" : "~1.0.0",
"passport-facebook" : "~2.1.1",
"passport-twitter" : "~1.0.4",
"passport-google-oauth" : "~1.0.0",
"connect-flash" : "~0.1.1",
"bcrypt-nodejs" : "latest",
"morgan": "~1.7.0",
"body-parser": "~1.15.2",
"cookie-parser": "~1.4.3",
"method-override": "~2.3.6",
"express-session": "~1.14.1"
}
}
Most of these are pretty self-explanatory.
I use bcrypt-nodejs instead of bcrypt since it is easier to set up in Windows.
Now that we have all of our dependencies ready to go, let’s go ahead and install them:
npm install
With all of our packages ready to go, let's set up our application in server.js.
server.js
Let’s make all our packages work together nicely. Our goal is to set up this file and try to have it bootstrap our entire application. We’d like to not go back into this file if it can be helped. This file will be the glue for our entire application.
// set up ======================================================================
// get all the tools we need
var express = require('express');
var app = express();
var port = process.env.PORT || 8080;
var mongoose = require('mongoose');
var passport = require('passport');
var flash = require('connect-flash');
var morgan = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');
var session = require('express-session');
var configDB = require('./config/database.js');
// configuration ===============================================================
mongoose.connect(configDB.url); // connect to our database
// require('./config/passport')(passport); // pass passport for configuration
// set up our express application
app.use(morgan('dev')); // log every request to the console
app.use(cookieParser()); // read cookies (needed for auth)
app.use(bodyParser()); // get information from html forms
app.set('view engine', 'ejs'); // set up ejs for templating
// required for passport
app.use(session({ secret: 'ilovescotchscotchyscotchscotch' })); // session secret
app.use(passport.initialize());
app.use(passport.session()); // persistent login sessions
app.use(flash()); // use connect-flash for flash messages stored in session
// routes ======================================================================
require('./app/routes.js')(app, passport); // load our routes and pass in our app and fully configured passport
// launch ======================================================================
app.listen(port);
console.log('The magic happens on port ' + port);
We are going to comment out our passport configuration for now. We'll uncomment it after we create that config/passport.js file.
The path of our passport object is important to note here. We create it at the very beginning of the file with var passport = require('passport');. Then we pass it into our config/passport.js file for it to be configured. Then we pass it to the app/routes.js file for it to be used in our routes.
Now with this file, we have our application listening on port 8080. All we have to do to start up our server is:
node server.js
Then when we visit http://localhost:8080 we will see our application. (Not really right this moment, since we have some more setup to do.)
Auto Refreshing: By default, Node.js doesn't automatically restart our server every time we change files. To do that we'll use nodemon. Just install with npm install -g nodemon and use with nodemon server.js.
Now, this won’t do much for our application since we don’t have our database configuration, routes, user model, or passport configuration set up. Let’s do the database and routes now.
config/database.js
We are already calling this file in server.js. Now we just have to set it up.
module.exports = {
'url' : 'your-settings-here' // looks like mongodb://<user>:<pass>@mongo.onmodulus.net:<port>
};
Fill this in with your own database. If you don’t have a MongoDB database lying around, I would suggest going to Modulus.io and grabbing one. Once you sign up (and you get a $15 credit for signing up), you can create your database, grab its connection URL, and place it in this file.
You can also install MongoDB locally and use a local database. You can find instructions here: An Introduction to MongoDB.
app/routes.js
We will keep our routes simple for now. We will have the following routes:
Home page (/)
Login page (/login)
Signup page (/signup)
module.exports = function(app, passport) {
// =====================================
// HOME PAGE (with login links) ========
// =====================================
app.get('/', function(req, res) {
res.render('index.ejs'); // load the index.ejs file
});
// =====================================
// LOGIN ===============================
// =====================================
// show the login form
app.get('/login', function(req, res) {
// render the page and pass in any flash data if it exists
res.render('login.ejs', { message: req.flash('loginMessage') });
});
// process the login form
// app.post('/login', do all our passport stuff here);
// =====================================
// SIGNUP ==============================
// =====================================
// show the signup form
app.get('/signup', function(req, res) {
// render the page and pass in any flash data if it exists
res.render('signup.ejs', { message: req.flash('signupMessage') });
});
// process the signup form
// app.post('/signup', do all our passport stuff here);
// =====================================
// PROFILE SECTION =====================
// =====================================
// we will want this protected so you have to be logged in to visit
// we will use route middleware to verify this (the isLoggedIn function)
app.get('/profile', isLoggedIn, function(req, res) {
res.render('profile.ejs', {
user : req.user // get the user out of session and pass to template
});
});
// =====================================
// LOGOUT ==============================
// =====================================
app.get('/logout', function(req, res) {
req.logout();
res.redirect('/');
});
};
// route middleware to make sure a user is logged in
function isLoggedIn(req, res, next) {
// if user is authenticated in the session, carry on
if (req.isAuthenticated())
return next();
// if they aren't redirect them to the home page
res.redirect('/');
}
app.post: For now, we will comment out the routes for handling the form POST. We do this since passport isn't set up yet.
req.flash: This is the connect-flash way of getting flashdata in the session. We will create the loginMessage inside our passport configuration.
isLoggedIn: Using route middleware, we can protect the profile section route. A user has to be logged in to access that route. Using the isLoggedIn function, we will kick a user back to the home page if they try to access http://localhost:8080/profile and they are not logged in.
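The middleware hand-off that makes this work can be traced without Express at all. In this sketch, runRoute and the req/res stubs are hypothetical, made up purely to show the next() control flow:

```javascript
// A middleware either calls next() to pass control onward, or ends the cycle itself.
function isLoggedIn (req, res, next) {
  if (req.isAuthenticated()) return next();
  res.redirect('/');
}

function runRoute (middleware, handler, req, res) {
  middleware(req, res, () => handler(req, res)); // next() invokes the handler
}

let outcome = null;
const res = { redirect: (url) => { outcome = 'redirect:' + url; } };

// A logged-in user reaches the handler:
runRoute(isLoggedIn, (req) => { outcome = 'profile:' + req.user; },
         { isAuthenticated: () => true, user: 'chris' }, res);
const loggedIn = outcome;  // 'profile:chris'

// An anonymous user gets bounced to the home page:
runRoute(isLoggedIn, () => { outcome = 'profile'; },
         { isAuthenticated: () => false }, res);
const anonymous = outcome; // 'redirect:/'
```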
Logout: We will handle logout by using req.logout() provided by passport. After logging out, redirect the user to the home page.
With our server running, we can visit our application in our browser at http://localhost:8080. Once again, we won't see much since we haven't made our views. Let's go do that now. (We're almost to the authentication stuff, I promise.)
views/index.ejs, views/login.ejs, views/signup.ejs
Here we’ll define our views for our home page, login page, and signup/registration page.
views/index.ejs
Our home page will just show links to all our forms of authentication.
<!doctype html>
<html>
<head>
<title>Node Authentication</title>
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.2/css/bootstrap.min.css"> <!-- load bootstrap css -->
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/font-awesome/4.0.3/css/font-awesome.min.css"> <!-- load fontawesome -->
<style>
body { padding-top:80px; }
</style>
</head>
<body>
<div class="container">
<div class="jumbotron text-center">
<h1><span class="fa fa-lock"></span> Node Authentication</h1>
<p>Login or Register with:</p>
<a href="/login" class="btn btn-default"><span class="fa fa-user"></span> Local Login</a>
<a href="/signup" class="btn btn-default"><span class="fa fa-user"></span> Local Signup</a>
</div>
</div>
</body>
</html>
Now if we visit our app in our browser, we’ll have a site that looks like this:
Here are the views for our login and signup pages also.
views/login.ejs
<!doctype html>
<html>
<head>
<title>Node Authentication</title>
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.2/css/bootstrap.min.css"> <!-- load bootstrap css -->
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/font-awesome/4.0.3/css/font-awesome.min.css"> <!-- load fontawesome -->
<style>
body { padding-top:80px; }
</style>
</head>
<body>
<div class="container">
<div class="col-sm-6 col-sm-offset-3">
<h1><span class="fa fa-sign-in"></span> Login</h1>
<!-- show any messages that come back with authentication -->
<% if (message.length > 0) { %>
<div class="alert alert-danger"><%= message %></div>
<% } %>
<!-- LOGIN FORM -->
<form action="/login" method="post">
<div class="form-group">
<label>Email</label>
<input type="text" class="form-control" name="email">
</div>
<div class="form-group">
<label>Password</label>
<input type="password" class="form-control" name="password">
</div>
<button type="submit" class="btn btn-warning btn-lg">Login</button>
</form>
<hr>
<p>Need an account? <a href="/signup">Signup</a></p>
<p>Or go <a href="/">home</a>.</p>
</div>
</div>
</body>
</html>
views/signup.ejs
<!doctype html>
<html>
<head>
<title>Node Authentication</title>
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.2/css/bootstrap.min.css"> <!-- load bootstrap css -->
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/font-awesome/4.0.3/css/font-awesome.min.css"> <!-- load fontawesome -->
<style>
body { padding-top:80px; }
</style>
</head>
<body>
<div class="container">
<div class="col-sm-6 col-sm-offset-3">
<h1><span class="fa fa-sign-in"></span> Signup</h1>
<!-- show any messages that come back with authentication -->
<% if (message.length > 0) { %>
<div class="alert alert-danger"><%= message %></div>
<% } %>
<!-- LOGIN FORM -->
<form action="/signup" method="post">
<div class="form-group">
<label>Email</label>
<input type="text" class="form-control" name="email">
</div>
<div class="form-group">
<label>Password</label>
<input type="password" class="form-control" name="password">
</div>
<button type="submit" class="btn btn-warning btn-lg">Signup</button>
</form>
<hr>
<p>Already have an account? <a href="/login">Login</a></p>
<p>Or go <a href="/">home</a>.</p>
</div>
</div>
</body>
</html>
Finally! We have set up our application and gotten to the authentication part. Don’t worry: the rest of the authentication articles in this tutorial series will use this same base, so we won’t have to set up our application again.
So far we have installed our packages, set up our application, connected to our database, created our routes, and created our views.
Now we will create our user model, configure passport for local authentication, and use our configured passport
to process our login/signup forms.
We will create our user model for the entire tutorial series. Our user will have the ability to be linked to multiple social accounts and to a local account. For local accounts, we will be keeping email and password. For the social accounts, we will be keeping their id, token, and some user information.
You can change these fields out to be whatever you want. You can authenticate locally using username and password (passport-local
actually uses username by default but we’ll change that to email).
// load the things we need
var mongoose = require('mongoose');
var bcrypt = require('bcrypt-nodejs');
// define the schema for our user model
var userSchema = mongoose.Schema({
local : {
email : String,
password : String,
},
facebook : {
id : String,
token : String,
name : String,
email : String
},
twitter : {
id : String,
token : String,
displayName : String,
username : String
},
google : {
id : String,
token : String,
email : String,
name : String
}
});
// methods ======================
// generating a hash
userSchema.methods.generateHash = function(password) {
return bcrypt.hashSync(password, bcrypt.genSaltSync(8), null);
};
// checking if password is valid
userSchema.methods.validPassword = function(password) {
return bcrypt.compareSync(password, this.local.password);
};
// create the model for users and expose it to our app
module.exports = mongoose.model('User', userSchema);
Our model is done. Passwords are hashed through the user model’s generateHash method before they are saved to the database, so we never deal with generating the hash anywhere else. It is all handled nicely and neatly inside our user model.
Let’s move on to the important stuff of this article: authenticating locally!
All the configuration for passport
will be handled in config/passport.js
. We want to keep this code in its own file away from our other main files like routes or the server file. I have seen some implementations where passport
will be configured in random places. I believe having it in this config file will keep your overall application clean and concise.
So far, we have created our passport
object in server.js
, and then we pass it to our config/passport.js
file. This is where we configure our Strategy for local
, facebook
, twitter
, and google
. This is also the file where we will create the serializeUser
and deserializeUser
functions to store our user in session.
I would highly recommend going to read the passport docs to understand more about how the package works.
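To make the serialize/deserialize pair concrete, here is a small standalone sketch (not the article’s code — the in-memory usersById map stands in for the MongoDB lookup) showing the round trip: only the user id goes into the session, and the full user is looked up again on every request.

```javascript
// Stand-in for the database.
const usersById = {
  abc123: { id: 'abc123', email: 'user@example.com' }
};

// What serializeUser does: store just the id in the session.
function serializeUser(user, done) {
  done(null, user.id);
}

// What deserializeUser does: re-fetch the full user from the id.
function deserializeUser(id, done) {
  done(null, usersById[id] || null);
}

let sessionId, currentUser;
serializeUser(usersById.abc123, (err, id) => { sessionId = id; });
deserializeUser(sessionId, (err, user) => { currentUser = user; });

console.log(sessionId);         // abc123
console.log(currentUser.email); // user@example.com
```

Keeping only the id in the session keeps it small, at the cost of a database lookup per request.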
We will be handling login and signup in config/passport.js
. Let’s look at signup first.
// load all the things we need
var LocalStrategy = require('passport-local').Strategy;
// load up the user model
var User = require('../app/models/user');
// expose this function to our app using module.exports
module.exports = function(passport) {
// =========================================================================
// passport session setup ==================================================
// =========================================================================
// required for persistent login sessions
// passport needs ability to serialize and unserialize users out of session
// used to serialize the user for the session
passport.serializeUser(function(user, done) {
done(null, user.id);
});
// used to deserialize the user
passport.deserializeUser(function(id, done) {
User.findById(id, function(err, user) {
done(err, user);
});
});
// =========================================================================
// LOCAL SIGNUP ============================================================
// =========================================================================
// we are using named strategies since we have one for login and one for signup
// by default, if there was no name, it would just be called 'local'
passport.use('local-signup', new LocalStrategy({
// by default, local strategy uses username and password, we will override with email
usernameField : 'email',
passwordField : 'password',
passReqToCallback : true // allows us to pass back the entire request to the callback
},
function(req, email, password, done) {
// asynchronous
// User.findOne won't fire unless data is sent back
process.nextTick(function() {
// find a user whose email is the same as the form's email
// we are checking to see if the user trying to login already exists
User.findOne({ 'local.email' : email }, function(err, user) {
// if there are any errors, return the error
if (err)
return done(err);
// check to see if there's already a user with that email
if (user) {
return done(null, false, req.flash('signupMessage', 'That email is already taken.'));
} else {
// if there is no user with that email
// create the user
var newUser = new User();
// set the user's local credentials
newUser.local.email = email;
newUser.local.password = newUser.generateHash(password);
// save the user
newUser.save(function(err) {
if (err)
throw err;
return done(null, newUser);
});
}
});
});
}));
};
We have now provided a strategy to passport
called local-signup. We will use this strategy to process our signup form. Let’s open up our app/routes.js
and handle the POST for our signup form.
...
// process the signup form
app.post('/signup', passport.authenticate('local-signup', {
successRedirect : '/profile', // redirect to the secure profile section
failureRedirect : '/signup', // redirect back to the signup page if there is an error
failureFlash : true // allow flash messages
}));
...
That’s all the code we need for the route. All of the heavy-duty stuff lives inside of config/passport.js
. All we have to set here is where our failures and successes get redirected. Super clean.
There is also much more you can do with this. Instead of specifying a successRedirect
, you could use a callback and take more control over how your application works. Here is a great stackoverflow answer on error handling. It explains how to use done()
and how to be more specific with your handling of a route.
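As a taste of that custom-callback pattern, here is a hedged sketch of the shape it takes. The passport machinery is stubbed out with a plain function so the pattern runs standalone; in a real app, `authenticate` would be `passport.authenticate('local-login', callback)(req, res, next)`:

```javascript
// The custom callback receives exactly what the strategy's done()
// produced: (err, user, info).
function loginHandler(authenticate, req, res, next) {
  authenticate(function (err, user, info) {
    if (err) return next(err);                // hard error (e.g. DB down)
    if (!user) return res.redirect('/login'); // auth failed; info says why
    return res.redirect('/profile');          // success
  });
}

// Exercising it with stubbed collaborators:
const res = { redirect(url) { this.url = url; } };

loginHandler(cb => cb(null, false, 'No user found.'), {}, res, () => {});
console.log(res.url); // /login

loginHandler(cb => cb(null, { id: 1 }), {}, res, () => {});
console.log(res.url); // /profile
```

This gives you full control over redirects, status codes, and error messages instead of the fixed successRedirect/failureRedirect pair.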
With our passport
config finally laid out, we can uncomment that line in our server.js
. This will load our config and then we can use our signup form.
...
// uncomment this line
require('./config/passport')(passport); // pass passport for configuration
...
Now that we have passport
, our routes, and our redirects in place, let’s go ahead and test our signup form. In your browser, go to http://localhost:8080/signup
and fill out your form.
If all goes according to plan, you should be logged in, your user will be saved in the session, and you will be redirected to the /profile
page (the profile page will show nothing right now since we haven’t defined that view).
If we look in our database, we’ll also see our user sitting there cozily with all the credentials we created for him.
Exploring Your Database: I use Robomongo to see what’s in my database. Just download it and connect to your database to see your new users after they signup!
With users able to sign up, let’s give them a way to log in.
This will be very similar to the signup strategy. We’ll add the strategy to our config/passport.js
and the route in app/routes.js
.
...
// =========================================================================
// LOCAL LOGIN =============================================================
// =========================================================================
// we are using named strategies since we have one for login and one for signup
// by default, if there was no name, it would just be called 'local'
passport.use('local-login', new LocalStrategy({
// by default, local strategy uses username and password, we will override with email
usernameField : 'email',
passwordField : 'password',
passReqToCallback : true // allows us to pass back the entire request to the callback
},
function(req, email, password, done) { // callback with email and password from our form
// find a user whose email is the same as the form's email
// we are checking to see if the user trying to login exists
User.findOne({ 'local.email' : email }, function(err, user) {
// if there are any errors, return the error before anything else
if (err)
return done(err);
// if no user is found, return the message
if (!user)
return done(null, false, req.flash('loginMessage', 'No user found.')); // req.flash is the way to set flashdata using connect-flash
// if the user is found but the password is wrong
if (!user.validPassword(password))
return done(null, false, req.flash('loginMessage', 'Oops! Wrong password.')); // create the loginMessage and save it to session as flashdata
// all is well, return successful user
return done(null, user);
});
}));
};
We have now provided a strategy to passport
called local-login. We will use this strategy to process our login form. We can check if a user exists, if the password is wrong, and set flash data to show error messages. Let’s open up our app/routes.js
and handle the POST for our login form.
...
// process the login form
app.post('/login', passport.authenticate('local-login', {
successRedirect : '/profile', // redirect to the secure profile section
failureRedirect : '/login', // redirect back to the login page if there is an error
failureFlash : true // allow flash messages
}));
...
If you try to log in with a user email that doesn’t exist in our database, you will see the error. The same goes for if your password is wrong.
views/profile.ejs
Now we have functional signup and login forms. If a user authenticates successfully, they are redirected to the profile page. If they are not, they are sent back to the form with an error message. The last thing we need to do is make our profile page so that those lucky enough to sign up (all of us?) have an exclusive corner of our site all to themselves.
<!doctype html>
<html>
<head>
<title>Node Authentication</title>
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.2/css/bootstrap.min.css">
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/font-awesome/4.0.3/css/font-awesome.min.css">
<style>
body { padding-top:80px; word-wrap:break-word; }
</style>
</head>
<body>
<div class="container">
<div class="page-header text-center">
<h1><span class="fa fa-anchor"></span> Profile Page</h1>
<a href="/logout" class="btn btn-default btn-sm">Logout</a>
</div>
<div class="row">
<!-- LOCAL INFORMATION -->
<div class="col-sm-6">
<div class="well">
<h3><span class="fa fa-user"></span> Local</h3>
<p>
<strong>id</strong>: <%= user._id %><br>
<strong>email</strong>: <%= user.local.email %><br>
<strong>password</strong>: <%= user.local.password %>
</p>
</div>
</div>
</div>
</div>
</body>
</html>
After a user logs in, they can see all their information. It is grabbed from the session and passed to our view in the app/routes.js
file. We will also provide a link to log out.
There you have it! We’ve built a brand new application from scratch and have the ability to let users signup/register and log in. We even have support for flash messages, hashing passwords, and requiring a login for some sections of our site using route middleware.
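The route middleware mentioned above follows a simple pattern: check `req.isAuthenticated()` (which passport adds to every request) and either pass control on or redirect. Here is a sketch, exercised with stubbed req/res objects so it runs without a server — the real version would sit in app/routes.js in front of the /profile route:

```javascript
// Route middleware to make sure a user is logged in.
function isLoggedIn(req, res, next) {
  if (req.isAuthenticated()) return next(); // carry on to the route
  res.redirect('/');                        // not logged in: go home
}

// An authenticated request falls through to next():
let reachedRoute = false;
isLoggedIn({ isAuthenticated: () => true }, {}, () => { reachedRoute = true; });
console.log(reachedRoute); // true

// An unauthenticated request gets redirected:
let target = null;
isLoggedIn(
  { isAuthenticated: () => false },
  { redirect: url => { target = url; } },
  () => {}
);
console.log(target); // /
```

Any route that should be members-only just takes `isLoggedIn` as an extra argument before its handler, e.g. `app.get('/profile', isLoggedIn, handler)`.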
Coming up next we’ll be looking at how to take this same structure, and use passport
to authenticate with Facebook, Twitter, and Google. After that, we’ll look at how we can get all these things working together in the same application. Users will be able to log in with one type of account, and then link their other accounts.
As always, if you see any ways to improve this or need any clarification, sound off in the comments and we’ll respond pretty close to immediately… pretty close.
Routing is a key aspect of web applications (and other platforms too) and could not be left out of React. We can build fully fledged single-page applications with React if we harness the power of routing. This does not have to be a manual process; we can make use of React Router.
In this guide, we will touch almost every aspect of routing in React, and there will be a demo so you have something to play with as well.
You do not need much prior experience to follow along; the basics of React components and how to use them are enough for this tutorial.
You are going to learn not only how to route a React application, but also the basics of tooling React with Babel, npm, and Webpack. Before we start building, let’s set that up and see what our folder structure will look like.
First, create a new project:
- mkdir scotch-cars
- cd scotch-cars
- npm init
Follow npm init
wizard then install the following tooling dependencies:
- npm install webpack babel-loader babel-preset-es2015 babel-preset-react serve --save-dev
We installed the following tooling dependencies:
Next is to configure our loader which is Webpack. Webpack is configured using a config file. So touch
the file and update the config content as follows:
- touch webpack.config.js
var webpack = require('webpack');
var path = require('path');
var BUILD_DIR = path.resolve(__dirname, 'src/client/public');
var APP_DIR = path.resolve(__dirname, 'src/client/app');
var config = {
entry: APP_DIR + '/index.jsx',
output: {
path: BUILD_DIR,
filename: 'bundle.js'
},
module : {
loaders : [
{
test : /\.jsx?/,
include : APP_DIR,
loader : 'babel'
}
]
}
};
module.exports = config;
The most important aspect of a Webpack config is the exported config object. The minimal code above just needs an entry point, entry
which is where bundling needs to begin. It also requires an output, output
which is where the bundled result is dumped, and then module
, which defines what loaders should be used during the bundling process. In our case, babel
is the loader we need.
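As a small aside, the `test` pattern in the loader config can be checked in isolation. Note that `/\.jsx?/` matches both `.js` and `.jsx` filenames (the `x` is optional), so plain JS modules get transpiled too:

```javascript
// The same regex used in webpack.config.js above.
const jsxTest = /\.jsx?/;

console.log(jsxTest.test('index.jsx'));  // true  (.jsx matches)
console.log(jsxTest.test('helper.js')); // true  (.js matches, x optional)
console.log(jsxTest.test('style.css')); // false (no .js in the name)
```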
We need to explicitly tell Babel which presets it should make use of. You can do this with package.json
or in a .babelrc
file. .babelrc
file is what you will see in most projects, so let’s follow that:
- touch .babelrc
Define the presets:
{
"presets" : ["es2015", "react"]
}
To run Webpack we have to use a reference to the bin every time which would cause friction to our dev process. What we can do is set up scripts in the package.json
to help us with that:
"scripts": {
"watch" : "webpack -d --watch",
"build" : "webpack",
"serve" : "serve ./public"
}
The public
directory will need an entry index.html
which is very simple:
<html>
<head>
<!--Stylesheet-->
<link rel="stylesheet" href="style.css">
</head>
<body>
<!--Container for React rendering-->
<div id="container"></div>
<!--Bundled file-->
<script src="bundle.js"></script>
</body>
</html>
This page loads the bundle and defines the DOM element where our React app will mount.
Let us now define our folder structure so as to have a view of the task at hand before we start building:
# Folder Structure
|---public
|------index.html # App entry
|------style.css # Custom style
|------bundle.js # Generated
|---src # Components live here
|------car
|---------car.component.jsx
|---------car-detail.component.jsx
|------common
|---------about.component.jsx
|---------home.component.jsx
|---------main.component.jsx
|------index.jsx # Build entry
|---.babelrc # Babel config file
|---index.js
|---package.json
|---webpack.config.js # Webpack config file
The following are wireframes of what we are up to in this tutorial:
Now that we have got a simple environment for React to live in, the next step is to set it up for routing.
React likes to keep things as simple as possible, and that is why the core library does exactly what React is about: components. Routing, DOM rendering, and other logic are abstracted into separate libraries. To use routing, we have to pull down React, React Router, and React DOM:
- npm install react react-dom react-router --save
A basic React component would look like this:
import React, { Component } from 'react';
import { render } from 'react-dom';
class Home extends Component {
render(){
return (<h1>Hi</h1>);
}
}
render(<Home />, document.getElementById('container'));
You can start watching and building these now with
npm run watch
. Then open another terminal and run npm run serve
to start the server.
Adding routing features to this app is very simple. Instead of rendering the Home
component, we import Router
and Route
and use them to render the component:
import React, { Component } from 'react';
import { render } from 'react-dom';
// Import routing components
import {Router, Route} from 'react-router';
class Home extends Component {
render(){
return (<h1>Hi</h1>);
}
}
render(
<Router>
{/* Each route is defined with Route */}
<Route path="/" component={Home}/>
</Router>,
document.getElementById('container')
);
The path
attribute defines the route URL and component
attribute defines the component for this route.
This kind of routing is different from what you might have seen in other UI frameworks and it is known as component routing. It is very easy to reason about because routes are also treated the same way components are treated. Routes are first-class components.
We will discuss the ugly URL later in this article.
You do not need routing if the only thing you want is a single path/page as our existing example shows. Let’s add more routes to the application:
import React, { Component } from 'react';
import { render } from 'react-dom';
// Import routing components
import {Router, Route} from 'react-router';
class Home extends Component {
render(){
return (<h1>Home Page</h1>);
}
}
// More components
class Car extends Component {
render(){
return (<h1>Cars page</h1>);
}
}
class About extends Component {
render(){
return (<h1>About page</h1>);
}
}
render(
<Router>
<Route path="/" component={Home}/>
<Route path="/cars" component={Car}/>
<Route path="/about" component={About}/>
</Router>,
document.getElementById('container')
);
Let us do a little bit of refactoring and concern separation because that is what goes down in a real app:
import React, { Component } from 'react';
class Car extends Component {
render(){
return (<h1>Cars page</h1>);
}
}
export default Car
import React, { Component } from 'react';
class Home extends Component {
render(){
return (<h1>Home Page</h1>);
}
}
export default Home
import React, { Component } from 'react';
class About extends Component {
render(){
return (<h1>About Page</h1>);
}
}
export default About
We just split the code into separate files while being guided by our predefined folder structure. Let’s assemble everything in the index
file:
import React, { Component } from 'react';
import { render } from 'react-dom';
// Import routing components
import {Router, Route} from 'react-router';
// Import custom components
import Home from './common/home.component.jsx'
import About from './common/about.component.jsx'
import Car from './car/car.component.jsx'
render(
<Router>
<Route path="/" component={Home}/>
<Route path="/cars" component={Car}/>
<Route path="/about" component={About}/>
</Router>,
document.getElementById('container')
);
Nothing lost and we have a better app.
This might be a better time to invite Bootstrap to the party. Of course, our app can’t remain that ugly. Import Bootstrap in the ./public/index.html
and allow it to do its magic:
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" >
Component routes are first-class components in React, so when it comes to parenting/ownership, the same rule applies. Our app is expected to have a navigation menu that is accessible by all the main routes. We can make another parent route for all the existing routes which will have the nav-bar:
import React, {Component} from 'react';
class Main extends Component {
render(){
return(
<div>
<nav className="navbar navbar-default">
<div className="container-fluid">
<div className="navbar-header">
<a className="navbar-brand" href="#">Scotch Cars</a>
</div>
<div className="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul className="nav navbar-nav">
<li className="active"><a href="#">Link <span className="sr-only">(current)</span></a></li>
<li><a href="#">Link</a></li>
</ul>
</div>
</div>
</nav>
<div className="container">
{/* Mount child routes */}
{this.props.children}
</div>
</div>
);
}
}
export default Main
In the route setup, add another Route
component to render
that wraps the rest of the routes;
render(
<Router>
<Route component={Main}>
<Route path="/" component={Home}/>
<Route path="/cars" component={Car}/>
<Route path="/about" component={About}/>
</Route>
</Router>,
document.getElementById('container')
);
Just like every other component, the contents of the child routes are poured out wherever {this.props.children}
is found on the parent route.
Routes can be prefixed with React Router. Route prefixing is very common when building API endpoints where we have something like:
- https://example.com/api/cars
api/
is the route prefix and we can do this with React Router for nested routes:
<Router>
<Route component={Main} path="/app">
<Route path="cars" component={Car}/>
<Route path="about" component={About}/>
</Route>
</Router>
The path
attribute on the parent will prefix the relative paths of its child routes with its value, and that gives us:
- /app
- /app/cars
- /app/about
There is another option for defining the root of your app (a.k.a. the index route). IndexRoute
is another component in React-Router that handles this:
<Router>
<Route path="/" component={Main}>
<IndexRoute component={Home} />
<Route path="/cars" component={Car}/>
<Route path="/about" component={About}/>
</Route>
</Router>
You need to import the component from React-Router:
import {Router, Route, IndexRoute} from 'react-router';
History is a term that covers everything it takes to manage location, history, and URL in React-Router.
Up till now, we have been dealing with an ugly URL. That is not the best React-Router can offer. React-Router offers three ways to manage URLs in React apps: browserHistory, hashHistory, and createMemoryHistory.
At the moment our app defaults to hashHistory
and that is what is responsible for the ugly URL. browserHistory
is the recommended option for user-facing apps. We just need to tell React-Router to use browserHistory
:
<Router history={browserHistory}>
<Route path="/" component={Main}>
<IndexRoute component={Home} />
<Route path="/cars" component={Car}/>
<Route path="/about" component={About}/>
</Route>
</Router>
Import browserHistory
from React-Router:
import {Router, Route, IndexRoute, browserHistory} from 'react-router';
You also need to set the base URL in the head of index.html
before it works as expected:
<base href="/" />
This works fine until you navigate to another page and reload:
This shouldn’t surprise you though because now we are making a request back to the server which does not even handle a wildcard route. Let’s use express
to fix this by creating a backend custom server with a wildcard route URL that takes us back to where we were when a reload happens:
const express = require('express')
const path = require('path')
const port = process.env.PORT || 3000
const app = express()
// serve static assets normally
app.use(express.static(__dirname + '/public'))
// Handles all routes so you do not get a not found error
app.get('*', function (request, response){
response.sendFile(path.resolve(__dirname, 'public', 'index.html'))
})
app.listen(port)
console.log("server started on port " + port)
Now we can no longer use serve, so we run the server with Node instead:
"scripts": {
"watch" : "webpack -d --watch",
"build" : "webpack",
"start" : "node index.js"
}
Our application is becoming more interesting, and it is time to add some route links. We need to navigate with clicks rather than by editing URL values by hand. Before we discuss how to add links, let’s populate our application with mock cars:
import React, { Component } from 'react';
class Car extends Component {
// Constructor is responsible for setting up props and setting initial state
constructor(props){
// Pass props to the parent component
super(props);
// Set initial state
this.state = {
// State needed
cars: []
};
}
componentDidMount(){
// Static data
const data = [
{
id: 1,
name: 'Honda Accord Crosstour',
year: '2010',
model: 'Accord Crosstour',
make: 'Honda',
media: 'http://www.example.com/honda/accord-crosstour/2010/oem/2010_honda_accord-crosstour_4dr-hatchback_ex-l_fq_oem_4_500.jpg',
price: '$16,811'
},
{
id: 2,
name: 'Mercedes-Benz AMG GT Coupe',
year: '2016',
model: 'AMG',
make: 'Mercedes Benz',
media: 'http://www.example.com/mercedes-benz/amg-gt/2016/oem/2016_mercedes-benz_amg-gt_coupe_s_fq_oem_1_717.jpg',
price: '$138,157'
},
{
id: 3,
name: 'BMW X6 SUV',
year: '2016',
model: 'X6',
make: 'BMW',
media: 'http://www.example.com/bmw/x6/2016/oem/2016_bmw_x6_4dr-suv_xdrive50i_fq_oem_1_717.jpg',
price: '$68,999'
},
{
id: 4,
name: 'Ford Edge SUV',
year: '2016',
model: 'Edge',
make: 'Ford',
media: 'http://www.example.com/ford/edge/2016/oem/2016_ford_edge_4dr-suv_sport_fq_oem_6_717.jpg',
price: '$36,275'
},
{
id: 5,
name: 'Dodge Viper Coupe',
year: '2017',
model: 'Viper',
make: 'Dodge',
media: 'http://www.example.com/dodge/viper/2017/oem/2017_dodge_viper_coupe_acr_fq_oem_3_717.jpg',
price: '$123,890'
}
];
// Update state
this.setState({cars: data});
}
render(){
// Map through cars and return linked cars
const carNode = this.state.cars.map((car) => {
return (
<a
href="#"
className="list-group-item"
key={car.id}>
{car.name}
</a>
)
});
return (
<div>
<h1>Cars page</h1>
<div className="list-group">
{carNode}
</div>
</div>
);
}
}
export default Car
We updated our Car component to present a list of data. The data is a static array; there is no need for the complexity of a network request, as this article is only about routing.
With some static data available, let’s tackle links. Plain anchor tags do work in a React-Router app, but they are not recommended. Link
is a component that uses anchor internally and is the recommended way for displaying links because it plays nicer with React Router:
<Link to="/">Home</Link>
That is how links are used and the to
property defines the path we want to navigate to on click just like href
. Let’s update our Main
component to apply links:
import React, {Component} from 'react';
import { Link } from 'react-router';
class Main extends Component {
render(){
return(
<div>
<nav className="navbar navbar-default">
<div className="container-fluid">
<div className="navbar-header">
<a className="navbar-brand" href="#">Scotch Cars</a>
</div>
<div className="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul className="nav navbar-nav">
{/* Change from a to Link */}
<li><Link to="/">Home</Link></li>
<li><Link to="/cars">Cars</Link></li>
<li><Link to="/about">About</Link></li>
</ul>
</div>
</div>
</nav>
<div className="container">
{this.props.children}
</div>
</div>
);
}
}
export default Main
We first import the Link component from React-Router then use the component for the navigation menu rather than <a>
.
For a better user experience, it is a good practice to let the user know where he/she is at by indicating with a contrasting style on the active link. Let’s define a style in our style.css
for that:
a.active {
color: #000000 !important;
text-decoration: underline !important;
}
Then we use React-Router’s activeClassName
to apply this style every time the respective link is active:
<li><Link to="/" activeClassName="active">Home</Link></li>
<li><Link to="/cars" activeClassName="active">Cars</Link></li>
<li><Link to="/about" activeClassName="active">About</Link></li>
We need route parameters when requesting a single item or resource for a page. Take for instance:
- /cars/3
- /cars/honda-crosstour
3
and honda-crosstour
are route parameters and we can use the value to retrieve a single car. During specification, the URLs are represented like this:
- /cars/:id
- /cars/:name
We will make use of only id
in this demo.
The first thing to do is define a route that takes a route parameter:
render(
<Router history={browserHistory}>
<Route component={Main}>
<Route path="/" component={Home}/>
<Route path="/cars" component={Car}/>
{/* Parameter route*/}
<Route path="/cars/:id" component={CarDetail}/>
<Route path="/about" component={About}/>
</Route>
</Router>,
document.getElementById('container')
);
The spotlight is on:
<Route path="/cars/:id" component={CarDetail}/>
The path
shows that a dynamic value is expected at the id
placeholder. The CarDetail
does not exist yet so let’s make that:
import React, { Component } from 'react';
class CarDetail extends Component {
render(){
return (<h1>{this.props.params.id}</h1>);
}
}
export default CarDetail
It is like every other component, except that the parameter is accessed via props
:
this.props.params.id
Don’t forget to import CarDetail
in the root index file.
Let’s use this ID to filter the cars array. Before we can do that we need to move the cars data array to a file that both Car
and CarDetail
components can access. That place is the root index, from which we can pass it down to the components as route props:
import React, { Component } from 'react';
import { render } from 'react-dom';
// Import routing components
import {Router, Route, IndexRoute, browserHistory} from 'react-router';
import Main from './common/main.component.jsx'
import Home from './common/home.component.jsx'
import About from './common/about.component.jsx'
import Car from './car/car.component.jsx'
import CarDetail from './car/car-detail.component.jsx'
const data = [
{
id: 1,
name: 'Honda Accord Crosstour',
year: '2010',
model: 'Accord Crosstour',
make: 'Honda',
media: 'http://www.example.com/honda/accord-crosstour/2010/oem/2010_honda_accord-crosstour_4dr-hatchback_ex-l_fq_oem_4_500.jpg',
price: '$16,811'
},
{
id: 2,
name: 'Mercedes-Benz AMG GT Coupe',
year: '2016',
model: 'AMG',
make: 'Mercedes Benz',
media: 'http://www.example.com/mercedes-benz/amg-gt/2016/oem/2016_mercedes-benz_amg-gt_coupe_s_fq_oem_1_717.jpg',
price: '$138,157'
},
{
id: 3,
name: 'BMW X6 SUV',
year: '2016',
model: 'X6',
make: 'BMW',
media: 'http://www.example.com/bmw/x6/2016/oem/2016_bmw_x6_4dr-suv_xdrive50i_fq_oem_1_717.jpg',
price: '$68,999'
},
{
id: 4,
name: 'Ford Edge SUV',
year: '2016',
model: 'Edge',
make: 'Ford',
media: 'http://www.example.com/ford/edge/2016/oem/2016_ford_edge_4dr-suv_sport_fq_oem_6_717.jpg',
price: '$36,275'
},
{
id: 5,
name: 'Dodge Viper Coupe',
year: '2017',
model: 'Viper',
make: 'Dodge',
media: 'http://www.example.com/dodge/viper/2017/oem/2017_dodge_viper_coupe_acr_fq_oem_3_717.jpg',
price: '$123,890'
}
];
render(
<Router history={browserHistory}>
<Route component={Main}>
<Route path="/" component={Home}/>
<Route path="/cars" component={Car} data={data}/>
<Route path="/cars/:id" component={CarDetail} data={data}/>
<Route path="/about" component={About}/>
</Route>
</Router>,
document.getElementById('container')
);
We now have the data array in the index.jsx
then we pass it down as a route prop:
<Route path="/cars" component={Car} data={data}/>
<Route path="/cars/:id" component={CarDetail} data={data}/>
Finally, we update the Car
component to use this data. The state is no longer needed so we can get rid of it and fetch the data from route props:
import React, { Component } from 'react';
import { Link } from 'react-router';
class Car extends Component {
render(){
// Get data from route props
const cars = this.props.route.data;
// Map through cars and return linked cars
const carNode = cars.map((car) => {
return (
<Link
to={"/cars/"+car.id}
className="list-group-item"
key={car.id}>
{car.name}
</Link>
)
});
return (
<div>
<h1>Cars page</h1>
<div className="list-group">
{carNode}
</div>
</div>
);
}
}
export default Car
The new thing to learn here is that we access the data differently, because it was passed to a route, not a component. Instead of:
this.props.data
we have:
this.props.route.data
We also took the opportunity to use Link instead of anchor tags for navigation:
<Link
to={"/cars/"+car.id}
className="list-group-item"
key={car.id}>
{car.name}
</Link>
We can now filter this data using the id parameter in CarDetail:
import React, { Component } from 'react';
class CarDetail extends Component {
render(){
// Car array
const cars = this.props.route.data;
// Car Id from param
const id = this.props.params.id;
// Filter car with ID
const car = cars.filter(car => {
if(car.id == id) {
return car;
}
});
return (
<div>
<h1>{car[0].name}</h1>
<div className="row">
<div className="col-sm-6 col-md-4">
<div className="thumbnail">
<img src={car[0].media} alt={car[0].name} />
</div>
</div>
<div className="col-sm-6 col-md-4">
<ul>
<li><strong>Model</strong>: {car[0].model}</li>
<li><strong>Make</strong>: {car[0].make}</li>
<li><strong>Year</strong>: {car[0].year}</li>
<li><strong>Price</strong>: {car[0].price}</li>
</ul>
</div>
</div>
</div>
);
}
}
export default CarDetail
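The id lookup that CarDetail performs can also be seen in isolation as plain JavaScript (sample data trimmed from the article's array; Array.prototype.find is a tidier alternative to filter here):

```javascript
// The lookup CarDetail performs, in isolation. filter always returns an
// array, which is why the component reads car[0]; Array.prototype.find
// returns the matching object itself. The loose == comparison matters
// because route params arrive as strings.
const cars = [
  { id: 1, name: 'Honda Accord Crosstour' },
  { id: 2, name: 'Mercedes-Benz AMG GT Coupe' }
];

const id = '2'; // from this.props.params.id

const viaFilter = cars.filter(car => car.id == id)[0];
const viaFind = cars.find(car => car.id == id);
// Both yield the Mercedes entry
```

Either form works; find simply saves the `[0]` indexing in the render method.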
Redirecting is quite easy. We can make use of browserHistory’s push method to redirect. For example, we can add a button to the details page that redirects to the list page on click:
import React, { Component } from 'react';
import { browserHistory } from 'react-router';
class CarDetail extends Component {
handleRedirect(){
browserHistory.push('/cars');
}
render(){
return(
// ... preceding codes
<div className="col-md-12">
<button className="btn btn-default" onClick={this.handleRedirect.bind(this)}>Go to Cars</button>
</div>
// ... succeeding codes
)
}
}
It is a common practice to restrict users from accessing a particular resource because of limitations placed on their roles in the given app. We can’t afford to allow a buyer to have access to the admin dashboard where prices can be changed. Though this logic MUST be handled on the backend, it is also important on the frontend for a better user experience:
const requireAuth = (nextState, replace) => {
if (!auth.isAdmin()) {
// Redirect to Home page if not an Admin
replace({ pathname: '/' })
}
}
export const AdminRoutes = () => {
return (
<Route path="/admin" component={Admin} onEnter={requireAuth} />
)
}
We are using the onEnter
lifecycle event to listen to when this route will be hit. Once that happens, a check is run to determine if the authenticated user is an administrator or not.
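Here is a runnable sketch of that guard, with a hypothetical auth helper standing in for a real session or token check:

```javascript
// A minimal sketch of the onEnter guard's logic, independent of React
// Router. The `auth` helper here is hypothetical -- a real app would
// check a session or token instead of a hard-coded role.
const auth = {
  role: 'buyer',
  isAdmin() { return this.role === 'admin'; }
};

const requireAuth = (nextState, replace) => {
  if (!auth.isAdmin()) {
    // Redirect to Home page if not an Admin
    replace({ pathname: '/' });
  }
};

// Simulate React Router entering the /admin route: `replace` records
// where the user would be redirected.
const redirects = [];
requireAuth({}, location => redirects.push(location.pathname));
// redirects is now ['/'] because our fake user is not an admin
```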
This was a long read, but if you followed along, you have the basics of what you need to get going with React. This article does not just serve as a tutorial but also as a reference for your day-to-day routing solutions in React with React Router.
When building Angular applications, one of the cornerstones we will use is ng-repeat
. Showing data is something that we do in applications like when we show a table of users or whatever other data we need to show our users. Today we’ll be looking at a way to sort and filter our tabular data. This is a common feature that is always useful so let’s look at what we’ll be building and dive right into the code.
Here’s a quick demo: http://codepen.io/sevilayha/pen/AmFLE/
Our application will allow us to:
- show a table of data (ng-repeat)
- sort the table by clicking a column header (orderBy)
- filter the table with a search box (filter)
These are three common functions in any application and Angular lets us implement these features in a very simple way. Let’s set up our sample application’s HTML and Angular parts and then look at how we can sort and filter.
We’ll be using Bootstrap and Font Awesome to style our app. Let’s take a look at the Angular module first. This will be a simple module with one controller where we define a few variables and the list of data we’ll show to our users (we’ll be using sushi, yum). The files we will need are:
index.html
app.js
Our demo is built inside of CodePen, so you can create the two files above or just work within your own CodePen. Here is the code for app.js
// app.js
angular.module('sortApp', [])
.controller('mainController', function($scope) {
$scope.sortType = 'name'; // set the default sort type
$scope.sortReverse = false; // set the default sort order
$scope.searchFish = ''; // set the default search/filter term
// create the list of sushi rolls
$scope.sushi = [
{ name: 'Cali Roll', fish: 'Crab', tastiness: 2 },
{ name: 'Philly', fish: 'Tuna', tastiness: 4 },
{ name: 'Tiger', fish: 'Eel', tastiness: 7 },
{ name: 'Rainbow', fish: 'Variety', tastiness: 6 }
];
});
We have set the 3 variables and the list of sushi. Now let’s use this module in our HTML. Here is the HTML we’ll need for index.html
:
<div class="container" ng-app="sortApp" ng-controller="mainController">
<div class="alert alert-info">
<p>Sort Type: {{ sortType }}</p>
<p>Sort Reverse: {{ sortReverse }}</p>
<p>Search Query: {{ searchFish }}</p>
</div>
<table class="table table-bordered table-striped">
<thead>
<tr>
<td>
Sushi Roll
</td>
<td>
Fish Type
</td>
<td>
Taste Level
</td>
</tr>
</thead>
<tbody>
<tr ng-repeat="roll in sushi">
<td>{{ roll.name }}</td>
<td>{{ roll.fish }}</td>
<td>{{ roll.tastiness }}</td>
</tr>
</tbody>
</table>
</div>
We are loading Bootstrap, Font Awesome, and Angular. We also apply the Angular module named sortApp
and the Angular controller called mainController
to the container div.
We are also using an ngRepeat
to loop over the sushi in our $scope.sushi
array we created in our Angular module.
Great. We have the list of data displayed all nicely for our users. Now let’s offer them some functionality by letting them sort the table.
We will be accomplishing this sorting feature using two of the variables that we created earlier ($scope.sortType
and $scope.sortReverse
). We will also be using the Angular orderBy
filter. Basically, applying a combination of sortType
and sortReverse
variables to an orderBy
clause in our ng-repeat
will sort the table.
<tr ng-repeat="roll in sushi | orderBy:sortType:sortReverse">
That’s all we need to change the sort order of our ngRepeat
. If you refresh your page, you’ll see that your list is sorted by name
in normal order. Now go into your Angular module and change the sortType
variable to $scope.sortType = 'fish'
and refresh the page. You’ll now see the table sorted by Fish Type. The next step is to change the headings of our table so that they will change the sortType
variable. That will automatically sort our table without refreshing the page (as is the Angular way).
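As a plain-JavaScript sketch, the orderBy expression amounts to the following (names taken from the tutorial; this mimics the filter's effect, not Angular's source):

```javascript
// A plain-JavaScript sketch of what orderBy:sortType:sortReverse does
// to our rows: sort a copy by the chosen property, then reverse if asked.
function orderBy(list, sortType, sortReverse) {
  const sorted = list.slice().sort((a, b) => {
    if (a[sortType] < b[sortType]) return -1;
    if (a[sortType] > b[sortType]) return 1;
    return 0;
  });
  return sortReverse ? sorted.reverse() : sorted;
}

const sushi = [
  { name: 'Cali Roll', fish: 'Crab', tastiness: 2 },
  { name: 'Philly', fish: 'Tuna', tastiness: 4 },
  { name: 'Tiger', fish: 'Eel', tastiness: 7 },
  { name: 'Rainbow', fish: 'Variety', tastiness: 6 }
];

const byFish = orderBy(sushi, 'fish', false);
// byFish[0].fish === 'Crab'; with sortReverse true it would be 'Variety'
```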
We’ll be adding links to our table headings. Let’s look at the thead
section of our site and use ng-click
to adjust the sortType
variable.
<td>
<a href="#" ng-click="sortType = 'name';">
Sushi Roll
</a>
</td>
Now as you click the links across your table headers, you’ll see your table sorted since we are setting the sortType
variable using ng-click
.
Next up, we’ll be adding a way to change the sort order so users can sort by ascending or descending. The orderBy filter arguments offer a third parameter for reverse
. We just have to pass in true or false to change the sort order. Currently, we have it set to false
since we defined that as one of the variables earlier ($scope.sortReverse
). The way we will give users the option to reverse the sort is to add sortReverse = !sortReverse
in the ng-click
of our table headers. This will change the order if users click the link.
<td>
<a href="#" ng-click="sortType = 'name'; sortReverse = !sortReverse">
Sushi Roll
</a>
</td>
Just add that to all the other ng-click
s in the code as well. Now if you click your header links, you’ll see the sort order changing. This isn’t very intuitive right now though since we don’t provide any sort of visual feedback that the sort is changing. Let’s add carets to show up and down to represent our current sort order. We’ll add an up and down caret here and then use ngShow
and ngHide
to show and hide the caret based on order.
<td>
<a href="#" ng-click="sortType = 'name'; sortReverse = !sortReverse">
Sushi Roll
<span ng-show="sortType == 'name' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'name' && sortReverse" class="fa fa-caret-up"></span>
</a>
</td>
Now the up arrow will only show if sortType
is name
and sortReverse
is set to true
. Applying this to the other headers will give you the same effect. With a few Angular directives, we are now able to show proper feedback for sorting and for sort order. The last part of this tutorial will deal with filtering the table of data.
Filtering data in an ng-repeat
is fairly easy since Angular also comes with the filter module. There are only two things we need to do here: create the form and apply the form variable to the ng-repeat
.
Let’s create the form first. Above the code for the table
and below the code for the alert
.
<form>
<div class="form-group">
<div class="input-group">
<div class="input-group-addon"><i class="fa fa-search"></i></div>
<input type="text" class="form-control" placeholder="Search the Fish" ng-model="searchFish">
</div>
</div>
</form>
A lot of that is Bootstrap markup to style our form beautifully, but the line we need to pay attention to is the input
. This is where we define our ng-model
to adjust the searchFish
variable. Now as we type into that input box, you should see that variable change in the alert box above. With that variable bound and ready to go, all we have to do is apply it to our ng-repeat:
<tr ng-repeat="roll in sushi | orderBy:sortType:sortReverse | filter:searchFish">
Just like that, our filter will now be applied to the table. Go ahead and type into your filter box and see the table data change. You’ll also notice that the orderBy and filter will work together to find you the exact sushi roll that you want.
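What the filter does to the rows can be sketched in plain JavaScript (mimicking Angular's default substring matching, not its actual source):

```javascript
// Keep rows where any value contains the search term, case-insensitively,
// which is roughly what Angular's default `filter` does with a string.
function filterRolls(list, term) {
  const needle = String(term).toLowerCase();
  return list.filter(roll =>
    Object.values(roll).some(value =>
      String(value).toLowerCase().includes(needle)
    )
  );
}

const sushi = [
  { name: 'Cali Roll', fish: 'Crab', tastiness: 2 },
  { name: 'Philly', fish: 'Tuna', tastiness: 4 }
];

const matches = filterRolls(sushi, 'tuna');
// matches contains only the Philly roll
```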
Using some built-in Angular tools like ngRepeat
, orderBy
, filter
, ngClick
, and ngShow
, we’ve been able to build a clean and fast table of data. This is a great look at the power that Angular has in building some great functionality into your own applications without too much work.
For more Angular goodness, here are some other great articles:
Writing styles for large applications can be a really challenging task, as styles easily get mixed up and confusing. The major issue is usually encountered when trying to structure your styles and give proper naming to individual styles.
With time, patterns were introduced to enhance style organization and most of these patterns are implemented when we make use of pre-processors like Sass and Less. The significant thing about these patterns is that they suggest organizing our styles and templates in the form of COMPONENTS.
Angular 2 is component-based which means that every UI functionality is built as a component. Therefore, as component-based styling is a recommended pattern, Angular 2 is just about to make writing styles a rather enjoyable experience. We will discuss different styling techniques and how to use them, but before that, we need to understand the concept of Shadow DOM and View Encapsulation.
Shadow DOM is included in the Web Components standard by W3C. Shadow DOM basically allows a group of DOM implementations to be hidden inside a single element (which is the basic idea of components) and encapsulate styles to the element. This means that encapsulated styles will only be available for that group of DOM elements and nothing more.
Remember that the idea of web components and shadow DOM is relatively new and not all browsers can handle the concept. This is where one of the major advantages of Angular 2 comes in as it allows us to choose whether to implement Shadow DOM, just emulate it (default), or not use it at all. This technique of handling Shadow DOM in Angular 2 is known as View Encapsulation.
The 3 states of view encapsulation are Emulated (the default), Native, and None.
Setting encapsulation is quite simple and is done right inside the @Component decorator:
@Component({
templateUrl: 'card.html',
styles: [`
.card {
height: 70px;
width: 100px;
}
`],
encapsulation: ViewEncapsulation.Native
// encapsulation: ViewEncapsulation.None
// encapsulation: ViewEncapsulation.Emulated is default
})
Now that we have taken some time to put Shadow DOM and View Encapsulation straight, we can go ahead to understand the different techniques of styling an Angular component. Cards are common components that we are familiar with, so permit me to use them for the illustrations.
This technique is the most obvious styling technique in Angular 2. This is because it is recommended, makes sense with the concept of components in mind and found everywhere in the Angular 2 documentation. It is implemented in the @Component
decorator of our component class like so:
@Component({
templateUrl: 'card.html',
styles: [`
.card {
height: 70px;
width: 100px;
}
`],
})
The expected behavior in the various view encapsulation modes is:
- None: the style is moved to a style tag and pushed to the head.
- Emulated: the style is moved to a style tag, pushed to the head, and uniquely identified so it can be matched with its component’s template. With that, the styles will be used only for the template in the same component.
Just like our everyday method of including external styles, which have a .css extension, we could also import external styles in an Angular 2 component. It is as simple as importing templates with the templateUrl property in the @Component decorator:
@Component({
styleUrls: ['css/style.css'],
templateUrl: 'card.html',
})
The expected behavior in the various view encapsulation modes is:
- None: the style is moved to a style tag and pushed to the head. It is appended right after the component inline style.
- Emulated: the style is moved to a style tag, pushed to the head, and uniquely identified so it can be matched with its component’s template, just like the component inline style.
As you can see, you must have guessed wrong if you expected the style to be imported with a link tag.
This is achievable with two methods. The first is writing the styles in a style tag placed before the template:
@Component({
template: `
<style>
h1 {
color: purple;
}
</style>
<h1>Styling Angular Components</h1>
`
})
The second method is using a style attribute directly on an element in the template:
@Component({
template: '<h1 style="color:pink">Styling Angular Components</h1>'
})
The expected behavior in the various view encapsulation modes is:
- None: the style is moved to a style tag and pushed to the head. It is appended right after the component inline and external styles. For method 2, the style just remains in the tag.
- Emulated: the style is moved to a style tag, pushed to the head, and uniquely identified so it can be matched with its component’s template, just like the component inline style. For method 2, the style still remains in the tag.
This is the point where we need to pay attention, as it can be quite tricky. If you have been following the article carefully, you will realize that component styles, if any, are always appended to the head first.
Where it then becomes confusing is that template inline styles (the first method) are appended before the external styles. This makes external styles take precedence, because in CSS the last rule wins.
To better understand priorities, I have created a Plunk with all the styling techniques we discussed. What I suggest is that you switch these styles, mess around with them and see the results. The comment section of this article is a great place to discuss your findings.
Whatever method you choose is accepted and that is the good thing about components and Angular 2. You don’t have to listen to the preaching of not using internal styles or inline styles as they are within components and will be scoped. On the other hand, we are now able to organize our code better in a modular pattern.
Angular 2 is awesome, right?
Today we’ll be looking at how we can use Angular’s ngShow
and ngHide
directives to do exactly what the directives sound like they do, show and hide!
ngShow
and ngHide
allow us to display or hide different elements. This helps when creating Angular apps since our single-page applications will most likely have many moving parts that come and go as the state of our application changes.
The great part about these directives is that we don’t have to do any of the showing or hiding ourselves with CSS or JavaScript. It is all handled by good old Angular.
To use either ngShow
or ngHide
, just add the directive to the element you’d like to show or hide.
<!-- FOR BOOLEAN VALUES =============================== -->
<!-- for true values -->
<div ng-show="hello">this is a welcome message</div>
<!-- can also show if a value is false -->
<div ng-show="!hello">this is a goodbye message</div>
<!-- FOR EXPRESSIONS =============================== -->
<!-- show if the appState variable is a string of goodbye -->
<div ng-show="appState == 'goodbye'">this is a goodbye message</div>
<!-- FOR FUNCTIONS =============================== -->
<!-- use a function defined in your controller to evaluate if true or false -->
<div ng-hide="checkSomething()"></div>
Once we have that set in our markup, we can set the hello
or goodbye
variables in a number of different ways. You could set it in your Angular controller and have the div
show or hide when your app loads up.
All of the above can be used for ng-show
or ng-hide
. This will just hide something if the value, expression, or function returns true
.
See the Pen How To Use ngShow and ngHide by Chris Sevilleja (@sevilayha) on CodePen.
We will create our link that uses ng-click
and will toggle the goCats
variable to true
or false
.
<a href ng-click="goCats = !goCats">Toggle Cats</a>
Then we can show or hide the cats image using ng-show
.
<img ng-src="http://i.imgur.com/vkW3Lhe.jpg" ng-show="goCats">
ng-src: We use ng-src
for the images so that Angular will instantiate and check to see if the image should be hidden. If we didn’t have this, the image would pop up on site load and then disappear once Angular realized it was supposed to be hidden.
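The toggle itself is just a boolean flip; in plain JavaScript:

```javascript
// What ng-click="goCats = !goCats" amounts to: each click flips the
// boolean that ng-show watches.
let goCats = false; // image hidden initially

function toggleCats() {
  goCats = !goCats;
}

toggleCats(); // goCats is now true, so ng-show would reveal the image
toggleCats(); // back to false: hidden again
```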
See the Pen How To Use ngShow and ngHide by Chris Sevilleja (@sevilayha) on CodePen.
Here we evaluate a string coming from our input box. We bind that input box using ng-model
to our variable: animal
. Depending on what that string is, a different image will show.
We will bind our input box to a variable called animal
.
<input type="text" ng-model="animal">
Then we will use ng-show
to evaluate the string.
<img ng-src="http://i.imgur.com/vkW3Lhe.jpg" ng-show="animal == 'cat'">
See the Pen How To Use ngShow and ngHide by Chris Sevilleja (@sevilayha) on CodePen.
Here we will do a simple check to see if the number entered is even or odd. We will create the function in our AngularJS file:
// set the default value of our number
$scope.myNumber = 0;
// function to evaluate if a number is even
$scope.isEven = function(value) {
if (value % 2 == 0)
return true;
else
return false;
};
Once we have our function, all we have to do is call it using ng-show
or ng-hide
and pass in our number. By passing in our number through the function, it keeps our Angular controller clean and testable.
<!-- show if our function evaluates to true -->
<div ng-show="isEven(myNumber)">
<h2>The number is even.</h2>
</div>
<!-- show if our function evaluates to false -->
<div ng-show="!isEven(myNumber)">
<h2>The number is odd.</h2>
</div>
With these two great directives, we can do great things with our applications. These are simple examples for showing and hiding elements based on booleans
, expressions
, and functions
, but these three can be used to do many different things for your application.
Hopefully, this helps when building great AngularJS based applications. In the future, we’ll be talking about animating ngShow
and ngHide
to create some great moving applications.
I’ve started building out our JavaScript Glossary and when I got to the replace()
method, I had to build out a snippet to handle replacing all occurrences of a string in a string.
myMessage.replace('sentence', 'message');
https://gist.github.com/chris-sev/7be587f89ba2ee18f105a57a791a2c18
Normally String replace()
only replaces the first instance it finds. If we want JavaScript to replace all, we’ll have to use a regular expression using /g
.
myMessage.replace(/sentence/g, 'message');
https://gist.github.com/chris-sev/452b0b9c2ff1d4ddf1ae3449f90ef595
In addition to using the inline /g
, we can use the constructor function of the RegExp object.
myMessage.replace(new RegExp('sentence', 'g'), 'message');
https://gist.github.com/chris-sev/fcd4396ee879d3ccc306512a59e2608a
To replace special characters like -/\^$*+?.()|[]{}
we’ll need to use a \
backslash to escape them.
Here we’ll replace all the -
in this string with just -
. I ran into this when building out the Scotch dashboard with markdown trying to escape all my symbols.
// replace - with -
myUrl.replace(/-/g, '-');
// or with RegExp
myUrl.replace(new RegExp('-', 'g'), '-');
https://gist.github.com/chris-sev/d1d233fb4ff5264cd50b8208a03dcf84
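Putting escaping and the RegExp-based replace-all together, a safe general helper might look like this (helper names are ours, not from the glossary snippet):

```javascript
// Escape the search string before handing it to the RegExp constructor,
// so characters like . + ( ) are matched literally.
function escapeRegExp(str) {
  return str.replace(/[-/\\^$*+?.()|[\]{}]/g, '\\$&');
}

function replaceAll(haystack, needle, replacement) {
  return haystack.replace(new RegExp(escapeRegExp(needle), 'g'), replacement);
}

replaceAll('a.b.c', '.', '-'); // 'a-b-c' (an unescaped '.' would match every character)
```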
Writing tests for an application that relies on external services, say, a RESTful API, is challenging. More often than not, an external resource may require authentication or authorization, or may have rate limiting. Hitting an endpoint for a service hosted on a platform like AWS as part of testing would also incur extra charges.
This quickly gets out of hand when you are running tests a couple of times a day as a team, as well as in continuous integration. Nock, an HTTP mocking and expectations library for Node.js, can be used to avoid this.
By the end of this post, we will have achieved the following.
To get started, create a simple Node.js application by creating an empty folder and running npm init
.
- mkdir nock-tests
- cd nock-tests
- npm init
Next, we will install the following packages for our application and testing environment.
- npm install --save axios
- npm install --save-dev mocha chai nock
Our tests will live inside a test directory. Go ahead and create one and add our first test file.
- mkdir test
- touch test/index.test.js
Our first test should be pretty straightforward. Assert that true is, well, true.
const expect = require('chai').expect;
describe('First test', () => {
it('Should assert true to be true', () => {
expect(true).to.be.true;
});
});
To run our test, we could run the mocha
command from our node_modules
but that can get annoying. We are instead going to add it as an npm script.
{
"name": "nock-tests",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "node_modules/.bin/mocha"
},
"author": "",
"license": "ISC",
"dependencies": {
"axios": "^0.16.2"
},
"devDependencies": {
"chai": "^4.0.2",
"mocha": "^3.4.2",
"nock": "^9.0.13"
}
}
At this point, running npm test
on your command-line should give you the following result.
- npm test
Output:
> nock-tests@1.0.0 test /Users/username/projects/nock-tests
> mocha
First test
✓ Should assert true to be true
1 passing (15ms)
We obviously don’t have any tests making requests, or a useful request for that matter, but we will be changing that.
Let’s go ahead and write a simple function that makes a HTTP request to the GitHub API to get a user by username. Go ahead and create an index.js
file in the root and add the following code.
const axios = require('axios');
module.exports = {
getUser(username) {
return axios
.get(`https://api.github.com/users/${username}`)
.then(res => res.data)
.catch(error => console.log(error));
}
};
Our test will assert that the request made returns an object with specific details. Replace the truthy test we created earlier with the following test for our code.
const expect = require('chai').expect;
const getUser = require('../index').getUser;
describe('Get User tests', () => {
it('Get a user by username', () => {
return getUser('octocat')
.then(response => {
//expect an object back
expect(typeof response).to.equal('object');
//Test result of name, company and location for the response
expect(response.name).to.equal('The Octocat')
expect(response.company).to.equal('GitHub')
expect(response.location).to.equal('San Francisco')
});
});
});
Let’s break down the test. We import the getUser
method from /index.js
, call it with the username octocat, and assert that the response is an object whose name, company, and location match GitHub’s record for that user. This should pass on running the test by actually making a request to the GitHub API.
Let’s fix this!
Nock works by overriding Node’s http.request function. It also overrides http.ClientRequest to cover modules that use it directly.
With Nock, you can specify the HTTP endpoint to mock as well as the response expected from the request in JSON format. The whole idea behind this is that we are not testing the GitHub API, we are testing our own application. For this reason, we make the assumption that the GitHub API’s response is predictable.
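The mocking idea can be sketched in miniature (an illustration of the concept only, not how nock is implemented):

```javascript
// A test swaps the real request function for a stub that returns a
// canned, predictable response -- the same principle nock applies to
// Node's http.request.
const httpClient = {
  request(url) {
    throw new Error('real network calls are disabled in tests');
  }
};

function mock(url, cannedResponse) {
  const original = httpClient.request.bind(httpClient);
  httpClient.request = requested =>
    requested === url ? cannedResponse : original(requested);
}

mock('https://api.github.com/users/octocat', { name: 'The Octocat' });
const res = httpClient.request('https://api.github.com/users/octocat');
// res.name === 'The Octocat', and any other URL still throws
```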
To mock the request, we will import nock into our test and add the request and expected response in the beforeEach
method.
const expect = require('chai').expect;
const nock = require('nock');
const getUser = require('../index').getUser;
const response = require('./response');
describe('Get User tests', () => {
beforeEach(() => {
nock('https://api.github.com')
.get('/users/octocat')
.reply(200, response);
});
it('Get a user by username', () => {
return getUser('octocat')
.then(response => {
//expect an object back
expect(typeof response).to.equal('object');
//Test result of name, company and location for the response
expect(response.name).to.equal('The Octocat')
expect(response.company).to.equal('GitHub')
expect(response.location).to.equal('San Francisco')
});
});
});
The expected response is defined as an export in a separate file.
module.exports = { login: 'octocat',
id: 583231,
avatar_url: 'https://avatars0.githubusercontent.com/u/583231?v=3',
gravatar_id: '',
url: 'https://api.github.com/users/octocat',
html_url: 'https://github.com/octocat',
followers_url: 'https://api.github.com/users/octocat/followers',
following_url: 'https://api.github.com/users/octocat/following{/other_user}',
gists_url: 'https://api.github.com/users/octocat/gists{/gist_id}',
starred_url: 'https://api.github.com/users/octocat/starred{/owner}{/repo}',
subscriptions_url: 'https://api.github.com/users/octocat/subscriptions',
organizations_url: 'https://api.github.com/users/octocat/orgs',
repos_url: 'https://api.github.com/users/octocat/repos',
events_url: 'https://api.github.com/users/octocat/events{/privacy}',
received_events_url: 'https://api.github.com/users/octocat/received_events',
type: 'User',
site_admin: false,
name: 'The Octocat',
company: 'GitHub',
blog: 'http://www.github.com/blog',
location: 'San Francisco',
email: null,
hireable: null,
bio: null,
public_repos: 7,
public_gists: 8,
followers: 1840,
following: 6,
created_at: '2011-01-25T18:44:36Z',
updated_at: '2017-07-06T21:26:58Z' };
To test that this is the actual response expected in the test, try editing one of the fields in the response object and run the test again. The tests should fail.
In my case, I will be changing the name value to Scotch
. You should get the error below.
We have only scratched the surface of what you can do with nock. It has very detailed documentation on how to use it, and it is worth exploring. For instance, if you are writing tests that involve error handling, you could mock error responses with an error message.
nock('http://www.google.com')
.get('/cat-poems')
.replyWithError('something awful happened');
Happy testing!
Organizing routing in large applications and APIs so that it’s easy to find and maintain in the future can be challenging.
As codebases get more mature and complex, they grow in size. You have unit tests, right? One place where things get hairy is with the organization of the code that handles your routing.
Routing is extremely important. It defines the URL structure that someone uses to interact with your web application. If it’s not organized well, it can be hard to find the logic for “that one route that never seems to work and all our customers are complaining about. What the hell was the original developer thinking anyway!”
There’s light at the end of the tunnel, folks. Express apps utilize routers that are essentially containers for a set of middleware. We can put this middleware holder only on a certain route, which allows us to keep our logic in separate files and bring them together on our terms! We are going to build a very simple car API. We will be implementing the following routes in the application.
GET /models
GET /models/:modelId
GET /models/:modelId/cars
GET /cars
GET /cars/:carId
Let’s get started by creating a project directory and downloading the dependencies we will use.
- mkdir car-api
- cd car-api
- npm init -y
- npm i -S express
- npm i -D nodemon
Also, let’s add an npm script to our package.json
to start our application.
"start": "nodemon app.js"
Now let’s create a file that will hold our JSON data for this API. Put the following in a file at the root called data.json
.
{
"models": [
{
"id": 1,
"name": "Toyota"
},
{
"id": 2,
"name": "Mazda"
}
],
"cars": [
{
"id": 1,
"name": "Corolla",
"modelId": 1
},
{
"id": 2,
"name": "Mazda3",
"modelId": 2
},
{
"id": 3,
"name": "Mazda6",
"modelId": 2
},
{
"id": 4,
"name": "Miata",
"modelId": 2
},
{
"id": 5,
"name": "Camry",
"modelId": 1
},
{
"id": 6,
"name": "CX-9",
"modelId": 2
}
]
}
Create a file in the root of our project called app.js
and put the following into it.
// Bring in our dependencies
const app = require('express')();
const routes = require('./routes');
// Connect all our routes to our application
app.use('/', routes);
// Turn on that server!
app.listen(3000, () => {
console.log('App listening on port 3000');
});
We are bringing in Express and our routes. We are connecting our routes to our application using .use
. Lastly, we are turning the server on. Let’s create those routes now. Create a file at routes/index.js
and put the following in it.
const routes = require('express').Router();
routes.get('/', (req, res) => {
res.status(200).json({ message: 'Connected!' });
});
module.exports = routes;
This is a pattern you will see throughout this tutorial. First, we are requiring Express and creating a new instance of Router
on it. We are holding that in a variable called routes
. Then we are creating a route at the root path of this Router
that sends back a simple message. Then we export the Router
.
This Router
is the “container” for the middleware on this route. Notice in app.js
that we imported this module, which exports a Router
, and attaches it at the root path of our API.
That means the middleware and routes attached to this Router
will be run as long as we are accessing a route that starts at the root path, which always happens! And since we added a route at the root of this router, that means it will be hit when someone visits our root path.
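The "container of middleware" idea can be sketched in plain JavaScript (a toy model, not Express internals, which also handle methods, params, and next()):

```javascript
// A router is a bundle of handlers that match paths relative to
// wherever the router is mounted.
function makeRouter() {
  const handlers = [];
  return {
    get(path, fn) { handlers.push({ path, fn }); },
    handle(url) {
      const hit = handlers.find(h => h.path === url);
      return hit ? hit.fn() : null;
    }
  };
}

// Mounting strips the prefix before asking the child router, which is
// why child routes are written relative to their mount point.
function mount(prefix, router) {
  return url =>
    url.startsWith(prefix)
      ? router.handle(url.slice(prefix.length) || '/')
      : null;
}

const routes = makeRouter();
routes.get('/', () => ({ message: 'Connected!' }));
const app = mount('/', routes);

const models = makeRouter();
models.get('/', () => ({ models: ['Toyota', 'Mazda'] }));
const modelsApp = mount('/models', models);

app('/');             // { message: 'Connected!' }
modelsApp('/models'); // { models: ['Toyota', 'Mazda'] }
```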
Start up the server with npm start
and view localhost:3000
in your browser. You should see our message popup. We will follow a similar pattern for the rest of the API.
![](https://scotch-res.cloudinary.com/image/upload/media/14/JE3AioKVTJC3YdrmPb0e_Screen Shot 2016-03-23 at 10.47.58 PM.png)
Let’s create our model routes now. Create a file at routes/models/index.js
. Put the code below in it.
const models = require('express').Router();
const all = require('./all');
models.get('/', all);
module.exports = models;
This should look almost identical to the previous one we created. This time we are bringing in a route from a file in the same directory called all.js
. Let’s create that next. Put the following in a file at routes/models/all.js
.
const data = require('../../data.json');
module.exports = (req, res) => {
const models = data.models;
res.status(200).json({ models });
};
We are importing our data. Then grabbing all the models and returning them in the response. Our server should be restarting on its own.
So visit localhost:3000/models
in the browser… yeah. We get an error. Why doesn’t it know about our new route? It’s because we never connected the models Router
to our routes Router
. Add the following into routes/index.js
.
const models = require('./models');
routes.use('/models', models);
![](https://scotch-res.cloudinary.com/image/upload/media/14/2MbtZjSqeFnzFktjwZDU_Screen Shot 2016-03-23 at 10.53.49 PM.png)
This imports all our model routes and attaches them to the main router of our application. Now you should be able to see all our models in the browser. Let’s make the next route so we can get only one model. Put the following in routes/models/single.js
.
const data = require('../../data.json');
module.exports = (req, res) => {
const modelId = req.params.modelId * 1;
const model = data.models.find(m => m.id === modelId);
res.status(200).json({ model });
};
We are finding the model and returning it. The line req.params.modelId * 1
simply coerces our modelId
from a string into an integer. We need it as an integer since that is how it’s stored in our data file. Add the following to routes/models/index.js
to connect this route to our application.
const single = require('./single');
models.get('/:modelId', single);
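As an aside, the `* 1` trick is only one of several ways to coerce a numeric string. Here is a small standalone sketch (plain Node, no Express; coerceId is a made-up helper name, not part of the tutorial's code) comparing the common options:

```javascript
// Three ways to turn a route parameter string into a number.
// coerceId is a hypothetical helper, not part of the tutorial's code.
function coerceId(param) {
  return param * 1; // same trick used in the route handler
}

console.log(coerceId('2'));          // 2
console.log(Number('2'));            // 2
console.log(parseInt('2', 10));      // 2

// Note: `* 1` and Number() reject trailing junk, parseInt() does not.
console.log(Number('2abc'));         // NaN
console.log(parseInt('2abc', 10));   // 2
```

Any of these would work for the route above, since the IDs in our data file are plain integers.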
Now try going to localhost:3000/models/2
in your browser. You’ll see information about Mazda. Excellent!
![](https://scotch-res.cloudinary.com/image/upload/media/14/u455OeaRDKpYGNVKPHRq_Screen Shot 2016-03-23 at 10.54.51 PM.png)
We need to add cars
as a nested resource of models. We can do this by simply creating another Router
and attaching it to our models Router
. First, add the following to routes/models/index.js
.
const cars = require('./cars');
models.use('/:modelId/cars', cars);
Notice how we are mounting this next router behind a path that starts with a model ID and ends with cars. That means anything on this next router is reached through the current router's prefix plus that path (/models/:modelId/cars
). Next, create a file at routes/models/cars/index.js
and put the following in it.
const cars = require('express').Router({ mergeParams: true });
const all = require('./all');
cars.get('/', all);
module.exports = cars;
This should look very familiar by now! The only new thing here is the mergeParams: true
option passed when creating the Router
. This tells the Router
to merge parameters that are created on this set of routes with the ones from its parents. Without this, we wouldn’t have access to the modelId
from any of the routes connected to this Router
.
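Conceptually, mergeParams controls whether a child router's params object is combined with its parent's. This is a simplified sketch of the idea (resolveParams is a made-up helper, not Express's actual implementation):

```javascript
// Simplified illustration of what { mergeParams: true } does.
// resolveParams is a hypothetical helper, not an Express API.
function resolveParams(parentParams, childParams, mergeParams) {
  return mergeParams
    ? { ...parentParams, ...childParams } // child sees parent's params too
    : { ...childParams };                 // child only sees its own params
}

const merged = resolveParams({ modelId: '1' }, { carId: '4' }, true);
console.log(merged); // { modelId: '1', carId: '4' }

const isolated = resolveParams({ modelId: '1' }, { carId: '4' }, false);
console.log(isolated.modelId); // undefined
```

Without the option, the cars routes would see only carId and lose the modelId captured by the parent router.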
Now we just need to create the logic for our all
route. Create a file at /routes/models/cars/all.js
, and put the following in it:
const data = require('../../../data.json');
module.exports = (req, res) => {
const modelId = req.params.modelId * 1;
const cars = data.cars.filter(c => c.modelId === modelId);
res.status(200).json({ cars });
};
Pretty simple. Just getting all the cars that have a modelId
matching the one in the URL. Try it out in the browser. Check out localhost:3000/models/1/cars
. You should see all cars in our “database” made by Toyota.
![](https://scotch-res.cloudinary.com/image/upload/media/14/esXWHu2CTHCx2zPodhB6_Screen Shot 2016-03-23 at 10.56.00 PM.png)
Now, we need to make routes for cars, but they need to be at the top level, not behind models
. Because of the way we have organized things, this is a cinch. Add the following to routes/index.js
.
const cars = require('./cars');
routes.use('/cars', cars);
Here we’re simply attaching a new Router
to our main Router
. Let’s create it now. Create a file at /routes/cars/index.js
and put the following in it.
const cars = require('express').Router();
const all = require('./all');
const single = require('./single');
cars.get('/', all);
cars.get('/:carId', single);
module.exports = cars;
This should look extremely familiar, so I won’t bore you with another explanation. Put the following in a file at routes/cars/all.js
.
const data = require('../../data.json');
module.exports = (req, res) => {
const cars = data.cars;
res.status(200).json({ cars });
};
Lastly, insert the following into a file at routes/cars/single.js
.
const data = require('../../data.json');
module.exports = (req, res) => {
const carId = req.params.carId * 1;
const car = data.cars.find(c => c.id === carId);
res.status(200).json({ car });
};
Go to the browser and check out our new route. You should be able to see all the cars in our database at localhost:3000/cars
and see data about the Mazda6 at localhost:3000/cars/3
. Pretty sweet!
![](https://scotch-res.cloudinary.com/image/upload/media/14/r3OViSHPTWKkaGB9euPG_Screen Shot 2016-03-23 at 10.57.02 PM.png)
![](https://scotch-res.cloudinary.com/image/upload/media/14/lALjJXyUTOOWdqctORo9_Screen Shot 2016-03-23 at 10.57.51 PM.png)
Now we have a working API, but there is something bothering me. When we try to see data about a model that doesn’t exist, like /models/200
, we should get a 404
error, but we don’t. We get an empty object. That’s not cool with me. Let’s fix this.
Instead of implementing this in every single route handler we have, let’s create some middleware that will do it for us. We can add some middleware onto our routers that will only get called when those routes are hit. Let’s add a param
middleware to our models Router
to make sure the model exists. If it doesn’t, we want to return a 404
.
Add the following to routes/models/index.js
:
const data = require('../../data.json');
models.param('modelId', (req, res, next, value) => {
const model = data.models.find(m => m.id === (value * 1));
if (model) {
req['model'] = model;
next();
} else {
res.status(404).send('Invalid model ID');
}
});
We are importing our data
. Then we add a param
middleware to the Router
. This middleware will be called whenever modelId
is present in the URL. This is perfect since this is what we want to validate! We are finding the model, and if it doesn’t exist, we return a 404
. If it does exist, we put it on the request for later usage and then move on to the next piece of middleware.
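To see this control flow without spinning up a server, here is a standalone sketch (plain Node, with made-up stub req/res/next objects and an inlined data array) of what the param middleware does:

```javascript
// Minimal stand-in for the param middleware's branching logic.
// The data array and the stub req/res objects are made up for illustration.
const models = [{ id: 1, name: 'Toyota' }, { id: 2, name: 'Mazda' }];

function modelIdParam(req, res, next, value) {
  const model = models.find(m => m.id === value * 1);
  if (model) {
    req.model = model; // stash it for later handlers
    next();
  } else {
    res.status(404).send('Invalid model ID');
  }
}

// A tiny fake response object mimicking res.status(...).send(...)
function makeRes() {
  return {
    statusCode: 200,
    body: null,
    status(code) { this.statusCode = code; return this; },
    send(body) { this.body = body; }
  };
}

// Simulate a hit on /models/2
const okReq = {};
let nextCalled = false;
modelIdParam(okReq, makeRes(), () => { nextCalled = true; }, '2');
console.log(nextCalled, okReq.model.name); // true Mazda

// Simulate a hit on /models/200
const badReq = {};
const badRes = makeRes();
modelIdParam(badReq, badRes, () => {}, '200');
console.log(badRes.statusCode, badRes.body); // 404 Invalid model ID
```

Either next() runs and the model rides along on the request, or the chain is short-circuited with a 404.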
Now try viewing a model that doesn’t exist in the browser, say localhost:3000/models/200
. You can see we now get a 404
error. Yay! Also notice that if you try to access any routes nested behind this route, it will 404
also. Try going to localhost:3000/models/200/cars
. Do you get a 404
? Boom!
Now let’s add it to our cars routes. Add the following to routes/cars/index.js
.
const data = require('../../data.json');
cars.param('carId', (req, res, next, value) => {
const car = data.cars.find(c => c.id === (value * 1));
if (car) {
req['car'] = car;
next();
} else {
res.status(404).send('Invalid car ID');
}
});
Give it a shot at localhost:3000/cars/200
. I’ve never been so happy to see a 404
.
Now we have some pretty sweet stuff going on here, but I’m still not satisfied. The code we just added to each of those index files looks way too similar. Refactoring to the rescue!
Create a file at utils/findObject.js
and put the following in it.
const data = require('../data.json');
module.exports = type => {
return (req, res, next, value) => {
const typePlural = `${type}s`;
const obj = data[typePlural].find(t => t.id === (value * 1));
if (obj) {
req[type] = obj;
next();
} else {
res.status(404).send(`Invalid ${type} ID`);
}
};
};
This should look very familiar. This code here is a function that, when called, returns our middleware function that we used in the previous two files. Our function takes a type
. In our case, this type
will be either “car” or “model.” We use this type to make sure we search through the correct piece of data for our object and then to make sure we add the correct piece of data to the request. Now we can use this to clean up our files from before. Replace the code we just added with the following in each of the given files.
// routes/models/index.js
const findObject = require('../../utils/findObject');
models.param('modelId', findObject('model'));
// routes/cars/index.js
const findObject = require('../../utils/findObject');
cars.param('carId', findObject('car'));
So much cleaner! If you view things in the browser, you will see that things still work the same, yet the code is cleaner and more modularized. We can also clean up some of our route handlers since we have access to that object in the request by the time the handler is hit.
// routes/models/single.js
module.exports = (req, res) => {
const model = req.model;
res.status(200).json({ model });
};
// routes/cars/single.js
module.exports = (req, res) => {
const car = req.car;
res.status(200).json({ car });
};
If you try this in your browser, it still works! Beautiful!
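Since the findObject factory itself does not depend on Express, you can sanity-check it in isolation. Here is a self-contained version (with inlined sample data instead of the real data.json) showing how each call closes over its own type:

```javascript
// Standalone version of the findObject factory with inlined sample data.
const data = {
  models: [{ id: 1, name: 'Toyota' }],
  cars: [{ id: 3, name: 'Mazda6' }]
};

const findObject = type => (req, res, next, value) => {
  // naive pluralization: fine for "model"/"car", not for e.g. "category"
  const obj = data[`${type}s`].find(t => t.id === value * 1);
  if (obj) {
    req[type] = obj;
    next();
  } else {
    res.status(404).send(`Invalid ${type} ID`);
  }
};

// One factory, two middlewares -- each closes over its own `type`.
const carReq = {};
findObject('car')(carReq, null, () => {}, '3');
console.log(carReq.car.name); // Mazda6

const modelReq = {};
findObject('model')(modelReq, null, () => {}, '1');
console.log(modelReq.model.name); // Toyota
```

Note the simple `${type}s` pluralization only works because both of our types pluralize with a plain "s".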
Routing is super important once apps start getting bigger and more complex. Having a good base for your routes will help you out in the future. Using this technique will make sure that you never have to spend a ton of time looking for a route handler. Just make sure the file structure follows the URL structure, and you’re good to go! Please leave any comments or feedback in the comments below!
React 16.6.0 is released! With it comes a host of new features including the two big ones:
React.memo()
React.lazy():
Code-splitting and lazy-loading with React Suspense
We’ll focus on React.memo()
for this article and React.lazy()
and Suspense
in an upcoming larger article.
React.memo() is similar to PureComponent in that it will help us control when our components rerender.
Components will only rerender if their props have changed!
Normally all of our React components in our tree will go through a render when changes are made. With PureComponent
and React.memo()
, we can have only some components render.
const MyMemoizedComponent = React.memo(function MyComponent(props) {
// only renders if props have changed
});
This is a performance boost since only the things that need to be rendered are rendered.
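By default, React.memo() compares props shallowly. The sketch below is a rough approximation of that comparison (not React's actual source), which is enough to see why it skips renders and where it can surprise you:

```javascript
// Approximation of the default shallow prop comparison React.memo uses.
function shallowEqual(prevProps, nextProps) {
  const prevKeys = Object.keys(prevProps);
  const nextKeys = Object.keys(nextProps);
  if (prevKeys.length !== nextKeys.length) return false;
  return prevKeys.every(key => Object.is(prevProps[key], nextProps[key]));
}

console.log(shallowEqual({ fuel: 100 }, { fuel: 100 })); // true  -> skip render
console.log(shallowEqual({ fuel: 100 }, { fuel: 50 }));  // false -> re-render

// Caveat: fresh object/array literals are never shallow-equal,
// so passing them inline defeats the memoization.
console.log(shallowEqual({ tags: ['a'] }, { tags: ['a'] })); // false
```

This is why memoized components pair best with primitive props or stable references.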
PureComponent
works with classes. React.memo()
works with functional components.
import React from 'react';
const MyMemoizedComponent = React.memo(function MyComponent(props) {
// only renders if props have changed!
});
// can also be an es6 arrow function
const MyArrowFunctionMemoizedComponent = React.memo(props => {
return <div>my memoized component</div>;
});
// and even shorter with implicit return
const MyImplicitReturnMemoizedComponent = React.memo(props => (
<div>implicit memoized component</div>
));
Since React.memo()
is a higher-order component, you can use it to wrap a functional component you already have.
const RocketComponent = props => <div>my rocket component. {props.fuel}!</div>;
// create a version that only renders on prop changes
const MemoizedRocketComponent = React.memo(RocketComponent);
I tried creating a quick demo to show the render happen and also not happen if a component hasn’t changed. Unfortunately, the React Developer Tools hasn’t fully implemented the React.memo()
stuff yet.
If you look at components, it shows TODO_NOT_IMPLEMENTED_YET
:
Once DevTools is updated, we’ll be able to see which components are being rendered. The memoized component should not trigger a render if its props haven’t changed!
And here’s the demo app:
https://codesandbox.io/s/53wj3rr3nn?runonclick=1&codemirror=1
Per Wikipedia:
In computing, memoization is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again.
This makes sense since that’s exactly what React.memo()
does! Check to see if an upcoming render will be different than the previous render. If they are the same, keep the previous one.
This is a great addition to React as I’ve always written things in the class form just to take advantage of PureComponent. Now we can have our cake (functional components) and eat it too (render only on changes) with React.memo()
!
When implementing pagination on mobile devices, you have to take a different approach, since screen space is minimal compared to the web. Because of this, infinite scrolling has long been the go-to solution, giving your users a smooth and pleasant experience.
In this tutorial, we will build an infinite scroll list using the FlatList component in React Native, consuming the Punk API, a free beer catalog API.
Here’s a small demo video of what the end result will look like:
We will be using the React Native CLI
to bootstrap our React Native app. Run the following command to install it globally:
- npm install -g react-native-cli
Next, we need to bootstrap the app in your preferred directory:
- react-native init react_native_infinite_scroll_tutorial
I’ll be using an android emulator for this tutorial but the code works for both iOS and Android platforms. In case you don’t have an android emulator setup follow the instructions provided in the Android documentation.
Make sure your emulator is up and running then navigate to your project directory and run the following command:
- react-native run-android
This should download all required dependencies, install the app on your emulator, and then launch it automatically. You should see a screen with the default text, as follows:
Now that we have our sample app up and running, let's install the required dependencies for the project. We will be using Axios to make requests to the server and Glamorous Native to style our components. Run the following command to install them:
- npm install -S axios glamorous-native
Directory structure is always crucial in an application. Since this is a simple demo app, we'll keep it as minimal as possible:
src
├── App.js
├── components
│ ├── BeerPreviewCard
│ │ ├── BeerPreviewCard.js
│ │ └── index.js
│ ├── ContainedImage
│ │ └── index.js
│ └── Title
│ └── index.js
├── config
│ └── theme.js
└── utils
└── lib
└── axiosService.js
In order to make our Axios usage easy, we will create a singleton instance of the Axios service that we can import across our components:
import axios from 'axios';
const axiosService = axios.create({
baseURL: 'https://api.punkapi.com/v2/',
timeout: 10000,
headers: {
'Content-Type': 'application/json'
}
});
// singleton instance
export default axiosService;
Next, we will create cards to display our beer data and add some designs to it.
theme.js
This file contains the color palette we will use across the app.
export const colors = {
french_blue: '#3f51b5',
deep_sky_blue: '#007aff',
white: '#ffffff',
black: '#000000',
veryLightPink: '#f2f2f2'
};
Title.js
This file contains the card text component that we will use to display the beer name in the card.
import glamorous from 'glamorous-native';
import { colors } from '../../config/theme';
const Title = glamorous.text((props, theme) => ({
fontFamily: 'robotoRegular',
fontSize: 16,
color: props.color || colors.black,
lineHeight: 24,
textAlign: props.align || 'left',
alignSelf: props.alignSelf || 'center'
}));
export default Title;
ContainedImage.js
This file contains our image component which will have a resizeMode
of contain so that the image fits within its containing component.
import React from 'react';
import glamorous from 'glamorous-native';
const CardImageContainer = glamorous.view((props, theme) => ({
flex: 1,
alignItems: 'stretch'
}));
const StyledImage = glamorous.image((props, theme) => ({
position: 'absolute',
top: 0,
left: 0,
bottom: 0,
right: 0
}));
const ContainedImage = props => {
return (
<CardImageContainer>
<StyledImage resizeMode="contain" {...props} />
</CardImageContainer>
);
};
export default ContainedImage;
BeerPreviewCard.js
This file contains the main card container. Here we combine the title component and the image component to form a card that displays the beer name and image.
import React from 'react';
import glamorous from 'glamorous-native';
// app theme colors
import { colors } from '../../config/theme';
// components
import Title from '../Title';
import ContainedImage from '../ContainedImage';
const CardContainer = glamorous.view((props, theme) => ({
height: 160,
width: '85%',
left: '7.5%',
justifyContent: 'space-around'
}));
const CardImageContainer = glamorous.view((props, theme) => ({
flex: 1,
alignItems: 'stretch'
}));
const BeerNameContainer = glamorous.view((props, theme) => ({
height: '30%',
backgroundColor: colors.deep_sky_blue,
justifyContent: 'center'
}));
const BeerPreviewCard = ({ name, imageUrl }) => {
return (
<CardContainer>
<CardImageContainer>
<ContainedImage source={{ uri: imageUrl }} />
</CardImageContainer>
<BeerNameContainer>
<Title align="center" color={colors.white}>
{name}
</Title>
</BeerNameContainer>
</CardContainer>
);
};
export default BeerPreviewCard;
The logic for fetching beers will be in App.js
which is the main component of the app. We consume the API by making a GET request to fetch a paginated list of beers:
import React, { Component } from 'react';
// axios service
import axiosService from './utils/lib/axiosService';
export default class AllBeersScreen extends Component {
state = {
data: [],
page: 1,
loading: true,
error: null
};
componentDidMount() {
this._fetchAllBeers();
}
_fetchAllBeers = () => {
const { page } = this.state;
const URL = `/beers?page=${page}&per_page=10`;
axiosService
.request({
url: URL,
method: 'GET'
})
.then(response => {
this.setState((prevState, nextProps) => ({
data:
page === 1
? Array.from(response.data)
: [...this.state.data, ...response.data],
loading: false
}));
})
.catch(error => {
this.setState({ error, loading: false });
});
};
render() {
return (
// map through beers and display card
);
}
}
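The interesting part of _fetchAllBeers is how each new page is merged into state. That logic can be isolated and checked on its own (mergePage is a hypothetical helper name, not part of the tutorial's code):

```javascript
// Page 1 replaces the list; later pages are appended to what we have.
// mergePage is a made-up helper mirroring the setState logic above.
function mergePage(existing, incoming, page) {
  return page === 1 ? Array.from(incoming) : [...existing, ...incoming];
}

const page1 = mergePage([], [{ id: 1 }, { id: 2 }], 1);
const page2 = mergePage(page1, [{ id: 3 }], 2);
console.log(page2.map(b => b.id)); // [ 1, 2, 3 ]

// A pull-to-refresh resets back to page 1, replacing stale data.
const refreshed = mergePage(page2, [{ id: 1 }], 1);
console.log(refreshed.length); // 1
```

This is exactly why the handlers we add later set page back to 1 on refresh but increment it on load-more.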
So what is a FlatList component? I'll quote the React Native docs, which describe it as a performant interface for rendering simple, flat lists, supporting the following features:
We will be using a few features from the above list for our app, namely the footer, pull to refresh, and scroll loading.
To use the FlatList component, you have to pass two main props: renderItem
and data.
We can now pass the data we fetched earlier on to the FlatList
component and use the BeerPreviewCard
component to render a basic FlatList
as follows:
export default class AllBeersScreen extends Component {
// fetch beer request and update state from earlier on
render() {
return (
<FlatList
contentContainerStyle={{
flex: 1,
flexDirection: 'column',
height: '100%',
width: '100%'
}}
data={this.state.data}
keyExtractor={item => item.id.toString()}
renderItem={({ item }) => (
<View
style={{
marginTop: 25,
width: '50%'
}}
>
<BeerPreviewCard name={item.name} imageUrl={item.image_url} />
</View>
)}
/>
);
}
Reload your app and you should see a view similar to this:
The main feature of infinite scrolling is loading content on demand as the user scrolls through the app. To achieve this, the FlatList
component requires two props namely onEndReached
and onEndReachedThreshold
.
onEndReached
is the callback invoked when the user's scroll position gets within the onEndReachedThreshold
of the end of the rendered content. onEndReachedThreshold
is a number indicating how far from the end of the visible content the trigger point sits; once the user scrolls to that position, the onEndReached
callback is triggered.
A value of 0.5
will trigger onEndReached
when the end of the content is within half the visible length of the list, which is what we need for this use case.
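In other words, the trigger condition can be modeled roughly like this (a simplified sketch of the rule, not the exact FlatList source; shouldTriggerLoadMore is a made-up name):

```javascript
// Hypothetical model of the onEndReachedThreshold rule:
// fire when the remaining distance drops below threshold * visible length.
function shouldTriggerLoadMore(contentLength, offset, visibleLength, threshold) {
  const distanceFromEnd = contentLength - visibleLength - offset;
  return distanceFromEnd < threshold * visibleLength;
}

// 2000px of content, 800px visible, threshold 0.5 -> trigger zone is the last 400px.
console.log(shouldTriggerLoadMore(2000, 700, 800, 0.5)); // false (500px still left)
console.log(shouldTriggerLoadMore(2000, 900, 800, 0.5)); // true  (300px left)
```

So a larger threshold loads the next page earlier, at the cost of more eager network requests.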
export default class AllBeersScreen extends Component {
state = {
data: [],
page: 1,
loading: true,
loadingMore: false,
error: null
};
// fetch beer request and update state from earlier on
_handleLoadMore = () => {
this.setState(
(prevState, nextProps) => ({
page: prevState.page + 1,
loadingMore: true
}),
() => {
this._fetchAllBeers();
}
);
};
render() {
return (
<FlatList
contentContainerStyle={{
flex: 1,
flexDirection: 'column',
height: '100%',
width: '100%'
}}
data={this.state.data}
renderItem={({ item }) => (
<View
style={{
marginTop: 25,
width: '50%'
}}
>
<BeerPreviewCard name={item.name} imageUrl={item.image_url} />
</View>
)}
onEndReached={this._handleLoadMore}
onEndReachedThreshold={0.5}
initialNumToRender={10}
/>
);
}
}
If you go back to the app and scroll down, you’ll notice the beer list has been automatically loaded as you scroll down (see the demo at the start of the tutorial).
The footer is basically the bottom part of our FlatList
component. When the user scrolls down, we want to show a loader while the content is being fetched. We can achieve this using the ListFooterComponent
prop where we will pass a function that returns an ActivityIndicator component wrapped in a View component:
_renderFooter = () => {
if (!this.state.loadingMore) return null;
return (
<View
style={{
position: 'relative',
// width and height here are assumed to come from Dimensions.get('window')
width: width,
height: height,
paddingVertical: 20,
borderTopWidth: 1,
marginTop: 10,
marginBottom: 10,
borderColor: colors.veryLightPink
}}
>
<ActivityIndicator animating size="large" />
</View>
);
};
render() {
return (
<FlatList
// other props
ListFooterComponent={this._renderFooter}
/>
);
}
Now, while scrolling, a loader will show on the screen as the content loads (see the demo at the start of the tutorial).
Pull-to-refresh functionality is widely used in almost every modern application that fetches data over the network. To achieve it in the FlatList
, we need to pass the onRefresh
prop, which triggers a callback when the user performs a pull-down gesture at the top of the screen:
_handleRefresh = () => {
this.setState(
{
page: 1,
refreshing: true
},
() => {
this._fetchAllBeers();
}
);
};
render() {
return (
<FlatList
// other props
onRefresh={this._handleRefresh}
refreshing={this.state.refreshing}
/>
);
}
Now when you pull down from the top of the screen, a loader will appear and the content will be refetched.
initialNumToRender
- This is the number of items we want to render when the app loads the data.
keyExtractor
- Used to extract a unique key for a given item at the specified index.
Infinite scrolling grants your users a smooth experience while using your app and is an easy way for you to deliver presentable and well-ordered content for your users.
You can access the code here.
As the topic implies, we are going to be building a To-Do application with React. Do not expect any surprises such as managing state with a state management library like Flux or Redux. I promise it will strictly be React. Maybe in the following articles, we can employ something like Redux but we want to focus on React and make sure everybody is good with React itself.
You don’t need many requirements to set up this project because we will make use of CodePen for demos. You can follow the demo or set up a new CodePen pen. You just need to import the React and ReactDOM libraries:
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width">
<title>To-Do</title>
</head>
<body>
<div class="container">
<div id="container" class="col-md-8 col-md-offset-2"> </div>
</div>
<script src="https://fb.me/react-15.1.0.js"></script>
<script src="https://fb.me/react-dom-15.1.0.js"></script>
</body>
</html>
ReactDOM is a standalone library that is used to render React components on the DOM.
There are two types of components. These types are not React-specific but apply to any component-based UI library or framework. They include:
Presentation Component: These are self-contained components that are responsible for the UI. They are composed with JSX and rendered using the render method. The key rule about this type of component is that it is stateless, meaning that no state of any sort is kept in it. Data is kept in sync using props
.
If all that a presentation component does is render HTML based on props
, then you can use a stateless function to define the component rather than a class.
Container Component: This type of component complements the presentation component by providing states. It’s always the guy at the top of the family tree, making sure that data is coordinated.
You do not necessarily need a state management tool beyond what React provides if what you are building does not have deeply nested children and is not too complex. A To-Do app is simple, so we can make do with what React offers for now, provided we understand how and when to use a presentation or container component.
It is a recommended practice to have a rough visual representation of what you are about to build. This practice is becoming very important when it comes to component-based designs because it is easier to recognize presentation components.
Your image need not be a clean sketch made with a design app; it can just be pencil work. The most important thing is that you have a visual representation of the task at hand.
From the above diagram, we can fish out our presentation components:
Functional components (a.k.a stateless components) are good for presentation components because they are simple to manage and reason about when compared with class components.
For that sake, we will create the first presentation component, TodoForm
, with a functional component:
const TodoForm = ({addTodo}) => {
// Input tracker
let input;
return (
<div>
<input ref={node => {
input = node;
}} />
<button onClick={() => {
addTodo(input.value);
input.value = '';
}}>
+
</button>
</div>
);
};
Functional components just receive props (which we destructured with ES6) as arguments and return JSX to be rendered. TodoForm
has just one prop which is a handler that handles the click event for adding a new todo.
The value of the input is passed to the input
member variable using React’s ref.
These components present the list of to-do. TodoList
is a ul
element that contains a loop of Todo
components (made of li
elements):
const Todo = ({todo, remove}) => {
// Each Todo
return (<li onClick={() => remove(todo.id)}>{todo.text}</li>);
}
const TodoList = ({todos, remove}) => {
// Map through the todos
const todoNode = todos.map((todo) => {
return (<Todo todo={todo} key={todo.id} remove={remove}/>)
});
return (<ul>{todoNode}</ul>);
}
See the Pen AXNJpJ by Chris Nwamba (@christiannwamba) on CodePen.
The remove
property is an event handler that will be called when the list item is clicked. The idea is to delete an item when it is clicked. This will be taken care of in the container component.
The only way the remove
property can be passed to the Todo
component is via its parent (not its grandparent). So even though the container component that owns TodoList
will handle item removal, we still have to pass the handler down from grandparent to grandchild through the parent.
This is a common challenge that you will encounter in a nested component when building React applications. If the nesting is going to be deep, it is advised you use container components to split the hierarchy.
The title component just shows the title of the application:
const Title = () => {
return (
<div>
<div>
<h1>to-do</h1>
</div>
</div>
);
}
This will eventually become the heart of this application by regulating props and managing state among the presentation components. We already have a form and a list that are independent of each other but we need to do some tying together where needed.
// Contaner Component
// Todo Id
window.id = 0;
class TodoApp extends React.Component{
constructor(props){
// Pass props to parent class
super(props);
// Set initial state
this.state = {
data: []
}
}
// Add todo handler
addTodo(val){
// Assemble data
const todo = {text: val, id: window.id++}
// Update state without mutating it directly
this.setState({data: [...this.state.data, todo]});
}
// Handle remove
handleRemove(id){
// Filter all todos except the one to be removed
const remainder = this.state.data.filter((todo) => {
return todo.id !== id;
});
// Update state with filter
this.setState({data: remainder});
}
render(){
// Render JSX
return (
<div>
<Title />
<TodoForm addTodo={this.addTodo.bind(this)}/>
<TodoList
todos={this.state.data}
remove={this.handleRemove.bind(this)}
/>
</div>
);
}
}
We first set up the component’s constructor by passing props to the parent class and setting the initial state of our application.
Next, we create handlers for adding and removing a todo; their events are fired in the TodoForm
component and Todo
component respectively. setState
method is used to update the application state at any point.
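The add and remove handlers boil down to two immutable-style array operations, which can be sketched and verified outside React (addTodo and removeTodo here are standalone helper versions of the class methods above):

```javascript
// Add: produce a new array with the todo appended.
function addTodo(todos, todo) {
  return [...todos, todo];
}

// Remove: keep every todo except the one with the matching id.
function removeTodo(todos, id) {
  return todos.filter(todo => todo.id !== id);
}

const start = [{ id: 0, text: 'write article' }];
const added = addTodo(start, { id: 1, text: 'review code' });
const removed = removeTodo(added, 0);
console.log(added.length, removed.length); // 2 1
console.log(start.length); // 1 -- the original array is untouched
```

Producing new arrays rather than mutating the old ones is what lets setState reliably detect the change.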
As usual, we render the JSX passing in our props which will be received by the child components.
We have been rendering our demo components to the browser without discussing how, though it can be seen in the CodePen samples. React delegates rendering to a separate library, ReactDOM, which takes your app’s root component and renders it into a provided DOM element using an exposed render method:
ReactDOM.render(<TodoApp />, document.getElementById('container'));
The first argument is the component to be rendered and the second argument is the DOM element to render on.
We could step up our game by working with an HTTP server rather than just a simple local array. We do not have to bear the weight of jQuery to make HTTP requests; instead, we can use a smaller library like Axios.
<script src="https://npmcdn.com/axios/dist/axios.min.js"></script>
React lifecycle methods help you hook into the React process and perform actions at specific points. An example is doing something once a component is ready. This is done in the componentDidMount
lifecycle method. Lifecycle methods are just like normal class methods and cannot be used in a stateless component.
class TodoApp extends React.Component{
constructor(props){
// Pass props to parent class
super(props);
// Set initial state
this.state = {
data: []
}
this.apiUrl = 'https://57b1924b46b57d1100a3c3f8.mockapi.io/api/todos'
}
// Lifecycle method
componentDidMount(){
// Make HTTP request with Axios
axios.get(this.apiUrl)
.then((res) => {
// Set state with result
this.setState({data:res.data});
});
}
}
Mock API is a good mock backend for building frontend apps that will eventually consume a real API. We store the API URL provided by Mock API as a class property so it can be accessed by different members of the class, such as the componentDidMount
lifecycle method. Once there is a response and the promise resolves, we update the state using:
this.setState()
The add and remove methods now work with the API but are also optimized for a better user experience. We do not have to reload the data when there is a new todo; we just push it to the existing array. The same goes for remove:
// Add todo handler
addTodo(val){
// Assemble data
const todo = {text: val}
// Update data
axios.post(this.apiUrl, todo)
.then((res) => {
// Append the new todo without mutating state directly
this.setState({data: [...this.state.data, res.data]});
});
}
// Handle remove
handleRemove(id){
// Filter all todos except the one to be removed
const remainder = this.state.data.filter((todo) => {
return todo.id !== id;
});
// Update state with filter
axios.delete(this.apiUrl+'/'+id)
.then((res) => {
this.setState({data: remainder});
})
}
We could keep track of the total items in our To-Do with the Title component. This one is easy: place a property on the Title component to store the count and pass down the computed count from TodoApp:
// Title
const Title = ({todoCount}) => {
return (
<div>
<div>
<h1>to-do ({todoCount})</h1>
</div>
</div>
);
}
// Todo App
class TodoApp extends React.Component{
//...
render(){
// Render JSX
return (
<div>
<Title todoCount={this.state.data.length}/>
{/* ... */}
</div>
);
}
//...
}
The app works as expected but is not pretty enough for consumption. Bootstrap can take care of that.
We violated minor best practices for brevity but most importantly, you get the idea of how to build a React app following community-recommended patterns.
As I mentioned earlier, you don’t need to use a state management library in React if your application is simple. Anytime you are in doubt about whether you need one, you don’t need it (YAGNI).
Today we’ll talk about how to use one of Laravel’s lesser-known features to quickly read data from our Laravel applications. We can use Laravel artisan’s built-in php artisan tinker
to mess around with your application and things in the database.
Laravel artisan’s tinker is a REPL (read-eval-print loop). A REPL is an interactive language shell. It takes in a single user input, evaluates it, and returns the result to the user.
A quick and easy way to see the data in your database.
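To get a feel for what a REPL does under the hood, here is a tiny, illustrative JavaScript sketch of the read-eval-print idea. This is only the concept, not how tinker or PsySH is implemented:

```javascript
// Minimal read-eval-print sketch: take each input line, evaluate it,
// and return what would be printed (errors included, loop until done).
function repl(inputs) {
  return inputs.map((line) => {
    try {
      return String(eval(line)); // "eval" step; "print" is the returned string
    } catch (e) {
      return "Error: " + e.message;
    }
  });
}
// repl(["1 + 1", "[1,2,3].length"]) → ["2", "3"]
```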
Wouldn’t it be nice to see the immediate output of commands like:
// see the count of all users
App\User::count();
// find a specific user and see their attributes
App\User::where('username', 'samuel')->first();
// find the relationships of a user
$user = App\User::with('posts')->first();
$user->posts;
With php artisan tinker, we can do the above pretty quickly. Tinker is Laravel’s own REPL, based on PsySH. It allows us to interact with our applications and stop dd()ing and die()ing all the time. You may be familiar with littering your code with print_r()s and dd()s to ascertain values at points during computation.
Before we tinker with our application, let us create a demo project. Let’s call it ScotchTest. If you have the Laravel installer installed on your computer, run this command:
- laravel new ScotchTest
For those without the Laravel installer on their computer, you can still use composer to create a new Laravel project.
- composer create-project laravel/laravel ScotchTest --prefer-dist
After installing our demo Laravel project, we need to create a database and set up migrations. For this article, we will be using the default Laravel migrations, so configure your .env file to point to the database you created for this test. The default migrations include creating a users table and a password_resets table.
From the root of the project, run
- php artisan migrate
After migrating our database, we should see something similar to this:
By default, Laravel provides a model factory that we can use to seed our database. Now let’s begin to tinker with our application.
From the root of the Laravel project, run the following command.
- php artisan tinker
This command opens a REPL for interacting with your Laravel application. While in the REPL, we can run our model factory and seed our database.
factory(App\User::class, 10)->create();
A collection of ten new users should show up on your terminal. We can then check the database to see if the users were actually created.
App\User::all();
To get the total number of users in our database, we can just call count on the User model.
App\User::count();
After running App\User::all() and App\User::count(), mine looks like this. You should get something similar, the only difference being the generated data. From the REPL, we can create a new user. Note that we interact with the REPL just as we would write code in our Laravel application. So to create a new user, we would do:
$user = new App\User;
$user->name = "Sammy Shark";
$user->email = "sammy@example.com";
$user->save();
Now we can type $user into the REPL and get something like this.
To delete a user, we can just do
$user = App\User::find(1);
$user->delete();
With tinker, you can check out a class or function’s documentation right from the REPL, provided the class or function has DocBlocks.
- doc <functionName> # replace <functionName> with function name or class FQN
Calling doc on dd gives us this.
We can also check out a function or class’s source code while in the REPL using:
- show <functionName>
For example, calling show on dd gives us this.
Laravel Tinker is a tool that can help us easily interact with our application without having to spin up a local server. The next time you want to test a simple feature with a couple of throwaway lines in your project, use tinker instead.
For almost every form that you create, you will want some sort of validation. In React, working with and validating forms can be a bit verbose, so in this article we are going to use a package called Formik to help us out!
Here’s a sneak peek at what we are going to create.
https://codesandbox.io/s/4203r4582w
For this demo, I’ll be using CodeSandbox. You can use CodeSandbox as well or use your local environment. Totally up to you.
Regardless of what you use for this demo, you need to start with a new React app using Create React App. In CodeSandbox, I’m going to choose to do just that.
Now that we have our initial project created, we need to install three packages: formik, email-validator, and yup.
In your terminal, install formik first.
- npm install formik
I’ll do the same in the CodeSandbox dependency GUI.
Now install email-validator.
- npm install email-validator
Again installing from the CodeSandbox GUI.
Lastly, install yup.
- npm install yup
And again in CodeSandbox.
Now, we can start to stub out our ValidatedLoginForm component. For now, we just want to create the basics and import it into the root index.js file of the app to see it get displayed.
Create a new file in your src directory called ValidatedLoginForm.js. Inside of that file, add the basic code for a functional component.
import React from "react";

const ValidatedLoginForm = () => (
  <div>
    <h1>Validated Form Component</h1>
  </div>
);

export default ValidatedLoginForm;
Then, include it in your index.js file.
function App() {
  return (
    <div className="App">
      <ValidatedLoginForm />
    </div>
  );
}
And you should see it displayed.
Now, let’s start with the Formik stuff. First, import Formik, email-validator, and Yup in your new component.
import { Formik } from "formik";
import * as EmailValidator from "email-validator";
import * as Yup from "yup";
Now, let’s stub out the Formik tag with initial values. Think of initial values as setting your state initially.
You’ll also need an onSubmit callback. This callback takes two parameters: values and an object that we can destructure. The values represent the input values from your form. I’m adding some placeholder code here to simulate an async login call, then logging out what the values are.
In the callback, I’m also calling the setSubmitting function that was destructured from the second parameter. This will allow us to enable/disable the submit button while the asynchronous login call is happening.
<Formik
  initialValues={{ email: "", password: "" }}
  onSubmit={(values, { setSubmitting }) => {
    setTimeout(() => {
      console.log("Logging in", values);
      setSubmitting(false);
    }, 500);
  }}
>
  <h1>Validated Login Form</h1>
</Formik>
The Formik component uses render props to supply certain variables and functions to the form that we create. If you’re not very familiar with render props, I would take a second to check out Render Props Explained.
In short, render props are used to pass properties to children elements of a component. In this case, Formik will pass properties to our form code, which is the child. Notice that I’m using destructuring to get a reference to several specific variables and functions.
{ props => {
  const {
    values,
    touched,
    errors,
    isSubmitting,
    handleChange,
    handleBlur,
    handleSubmit
  } = props;
  return (
    <div>
      <h1>Validated Login Form</h1>
    </div>
  );
}}
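If it helps, here is the render-prop idea stripped down to plain JavaScript with made-up props; this is only a conceptual sketch, not Formik’s internals:

```javascript
// Render-prop pattern in miniature: a "component" computes some state,
// then delegates what to produce to a function its child passes in.
// The props object here is a stand-in, not Formik's real state.
function DataProvider(render) {
  const props = { values: { email: "" }, isSubmitting: false };
  return render(props); // the child decides what to render
}
// The child receives the provider's props and produces output from them:
// DataProvider(({ isSubmitting }) => (isSubmitting ? "busy" : "idle"))
```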
Now, we can actually start to write the code to display the form. For what it’s worth, in the finished CodeSandbox, I also created a LoginForm.js component to show how basic login forms are handled from scratch. You can also use that as a reference for the form we are going to add now.
The form is pretty simple with two inputs (email and password), labels for each, and a submit button.
{ props => {
  const {
    values,
    touched,
    errors,
    isSubmitting,
    handleChange,
    handleBlur,
    handleSubmit
  } = props;
  return (
    <form onSubmit={handleSubmit}>
      <label htmlFor="email">Email</label>
      <input name="email" type="text" placeholder="Enter your email" />
      <label htmlFor="password">Password</label>
      <input
        name="password"
        type="password"
        placeholder="Enter your password"
      />
      <button type="submit">
        Login
      </button>
    </form>
  );
}}
Notice that the onSubmit is calling the handleSubmit from the props.
I mentioned earlier that we could disable our submit button while the user is already attempting to log in. We can add that small change now by using the isSubmitting property that we destructured from props above.
<button type="submit" disabled={isSubmitting}>
  Login
</button>
I would recommend adding the CSS from the finished CodeSandbox as well. Otherwise, you won’t get the full effect. You can copy the below CSS into your styles.css file.
.App {
  font-family: sans-serif;
}
h1 {
  text-align: center;
}
form {
  max-width: 500px;
  width: 100%;
  margin: 0 auto;
}
label,
input {
  display: block;
  width: 100%;
}
label {
  margin-bottom: 5px;
  height: 22px;
}
input {
  margin-bottom: 20px;
  padding: 10px;
  border-radius: 3px;
  border: 1px solid #777;
}
input.error {
  border-color: red;
}
.input-feedback {
  color: rgb(235, 54, 54);
  margin-top: -15px;
  font-size: 14px;
  margin-bottom: 20px;
}
button {
  padding: 10px 15px;
  background-color: rgb(70, 153, 179);
  color: white;
  border: 1px solid rgb(70, 153, 179);
  transition: background-color 250ms;
}
button:hover {
  cursor: pointer;
  background-color: white;
  color: rgb(70, 153, 179);
}
Now we need to figure out how to validate our inputs. The first question is: what constraints do we want to have on our inputs? Let’s start with email. Email input should…
Password input should…
We’ll cover two ways to create these messages: one using Yup and one doing it yourself. We recommend using Yup, and you’ll see why shortly.
The first option is creating our own validate function. The purpose of the function is to iterate through the values of our form, validate these values in whatever way we see fit, and return an errors object that has key-value pairs of value and message.
Inside of the Formik tag, you can add the following code. This will always add an “Invalid email” error for email. We will start with this and go from there.
validate={values => {
  let errors = {};
  errors.email = "Invalid email";
  return errors;
}}
Now, we can ensure that the user has input something for the email.
validate={values => {
  let errors = {};
  if (!values.email) {
    errors.email = "Required";
  }
  return errors;
}}
Then, we can check that the email is actually a valid-looking email by using the email-validator package.
validate={values => {
  let errors = {};
  if (!values.email) {
    errors.email = "Required";
  } else if (!EmailValidator.validate(values.email)) {
    errors.email = "Invalid email address";
  }
  return errors;
}}
That takes care of email, so now for password. We can first check that the user input something.
validate={values => {
  let errors = {};
  if (!values.password) {
    errors.password = "Required";
  }
  return errors;
}}
Now we need to check the length to be at least 8 characters.
validate={values => {
  let errors = {};
  if (!values.password) {
    errors.password = "Required";
  } else if (values.password.length < 8) {
    errors.password = "Password must be 8 characters long.";
  }
  return errors;
}}
And lastly, that the password contains at least one number. For this, we can use regular expressions.
validate={values => {
  let errors = {};
  const passwordRegex = /(?=.*[0-9])/;
  if (!values.password) {
    errors.password = "Required";
  } else if (values.password.length < 8) {
    errors.password = "Password must be 8 characters long.";
  } else if (!passwordRegex.test(values.password)) {
    errors.password = "Invalid password. Must contain one number";
  }
  return errors;
}}
Here’s the whole thing.
validate={values => {
  let errors = {};
  if (!values.email) {
    errors.email = "Required";
  } else if (!EmailValidator.validate(values.email)) {
    errors.email = "Invalid email address";
  }
  const passwordRegex = /(?=.*[0-9])/;
  if (!values.password) {
    errors.password = "Required";
  } else if (values.password.length < 8) {
    errors.password = "Password must be 8 characters long.";
  } else if (!passwordRegex.test(values.password)) {
    errors.password = "Invalid password. Must contain one number";
  }
  return errors;
}}
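To see the logic in isolation, the same checks can be extracted into a plain JavaScript function. In this sketch the EmailValidator.validate call is swapped for a simple illustrative regex so the snippet is self-contained:

```javascript
// Standalone sketch of the validate logic above. The email check uses a
// simple regex as a stand-in for EmailValidator.validate (illustration only).
function validate(values) {
  let errors = {};
  const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; // stand-in email check
  if (!values.email) {
    errors.email = "Required";
  } else if (!emailRegex.test(values.email)) {
    errors.email = "Invalid email address";
  }
  const passwordRegex = /(?=.*[0-9])/; // lookahead: at least one digit
  if (!values.password) {
    errors.password = "Required";
  } else if (values.password.length < 8) {
    errors.password = "Password must be 8 characters long.";
  } else if (!passwordRegex.test(values.password)) {
    errors.password = "Invalid password. Must contain one number";
  }
  return errors;
}
// validate({ email: "", password: "" }) → both fields flagged "Required"
```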
Ok, you might have noticed that handling the validate logic on our own gets a bit verbose. We have to manually do all of the checks ourselves. It wasn’t that bad, I guess, but with the Yup package it gets even easier!
Yup is the recommended way to handle validation messages.
When using Yup, we no longer use the validate prop but instead validationSchema. Let’s start with email. Here is the equivalent validation using Yup.
validationSchema={Yup.object().shape({
  email: Yup.string()
    .email()
    .required("Required")
})}
Much shorter right?! Now, for password.
validationSchema={Yup.object().shape({
  email: Yup.string()
    .email()
    .required("Required"),
  password: Yup.string()
    .required("No password provided.")
    .min(8, "Password is too short - should be 8 chars minimum.")
    .matches(/(?=.*[0-9])/, "Password must contain a number.")
})}
Pretty sweet!
Now that we have the logic for creating error messages, we need to display them. We will need to update the inputs in our form a bit.
We need to update several properties for both email and password inputs.
Let’s start by updating value, onChange, and onBlur. Each of these will use properties from the render props.
<input
  name="email"
  type="text"
  placeholder="Enter your email"
  value={values.email}
  onChange={handleChange}
  onBlur={handleBlur}
/>
Then we can add a conditional “error” class if there are any errors. We can check for errors by looking at the errors object (remember how we calculated that object ourselves way back when).
We can also check the touched property, to see whether or not the user has interacted with the email input before showing an error message.
<input
  name="email"
  type="text"
  placeholder="Enter your email"
  value={values.email}
  onChange={handleChange}
  onBlur={handleBlur}
  className={errors.email && touched.email && "error"}
/>
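That className expression relies on JavaScript’s short-circuit evaluation: it yields the string "error" only when both conditions hold, and a falsy value otherwise. A tiny sketch with a hypothetical errorClass helper shows what it evaluates to:

```javascript
// Short-circuit evaluation sketch for the className expression above.
// errorClass is a hypothetical helper, not part of Formik.
function errorClass(errors, touched, field) {
  // "a && b && 'error'" is "error" only if a and b are both truthy;
  // the trailing "|| ''" normalizes the falsy case to an empty string.
  return (errors[field] && touched[field] && "error") || "";
}
// errorClass({ email: "Required" }, { email: true }, "email") → "error"
// errorClass({}, {}, "email") → ""
```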
And lastly, if there are errors, we will display them to the user. All in all, email will look like this.
<label htmlFor="email">Email</label>
<input
  name="email"
  type="text"
  placeholder="Enter your email"
  value={values.email}
  onChange={handleChange}
  onBlur={handleBlur}
  className={errors.email && touched.email && "error"}
/>
{errors.email && touched.email && (
  <div className="input-feedback">{errors.email}</div>
)}
Now we need to do the same with password. I won’t walk through each step because they are exactly the same as email. Here’s the final code.
<label htmlFor="password">Password</label>
<input
  name="password"
  type="password"
  placeholder="Enter your password"
  value={values.password}
  onChange={handleChange}
  onBlur={handleBlur}
  className={errors.password && touched.password && "error"}
/>
{errors.password && touched.password && (
  <div className="input-feedback">{errors.password}</div>
)}
Let’s try it out! You can start by clicking the button without entering anything. You should see validation messages.
Now, we can get more specific with testing messages. Refresh your page to do this. Click inside of the email input, but don’t type anything.
Then, click away from the input. You should see the “Required” message pop up. Notice that this message doesn’t pop up automatically when the page loads. We only want to display error messages after the user has interacted with the input.
Now, start to type. You should get a message about not being a valid email.
And lastly, type in a valid-looking email, and your error message goes away.
Now, repeat for password. Click on the input, then away, and you’ll get the required message.
Then, start typing and you’ll see the length validation.
Then, type 8 or more characters that do not include a number, and you’ll see the “must contain a number” message.
And lastly, add a number, and error messages go away.
Whew, that was a long one! Again, validation can be a tricky thing, but with the help of a few packages, it becomes a bit easier. At the end of the day though, I think we’ve got a pretty legit login form!
Today we’ll be creating a simple Laravel authentication. Using migrations, seeding, routes, controllers, and views, we’ll walk through the entire process.
This tutorial will walk us through:
To get our authentication working, we will need to have a database and users to log in with.
Set up your database and user. Assign that user to the database and make sure you update your settings in app/config/database.php.
Migrations are a way we can manipulate our database within our codebase. This means we don’t have to get our hands dirty by doing any SQL commands or messing around inside a tool like phpmyadmin. For more information and the benefits of migrations, see the official docs.
Migrations are very easy to create. The easiest way to create a migration will be to use the great artisan command-line interface created by Taylor Otwell. To create the migration, via the command line, in the root folder of your application, simply type:
- php artisan migrate:make create_users_table --create=users
This will automatically create a migration file inside of your app/database/migrations folder. Let’s take a look at the newly created file.
// app/database/migrations/####_##_##_######_create_users_table.php
<?php

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateUsersTable extends Migration {

    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('users', function(Blueprint $table)
        {
            $table->increments('id');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        //
    }

}
Laravel generates the core of the migration file for you, and the --create flag tells the migration to create the table, giving it an id field plus the timestamps field, which adds created_at and updated_at columns. Now we use the Schema Builder to fill out our users table.
// app/database/migrations/####_##_##_######_create_users_table.php
<?php

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateUsersTable extends Migration {

    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('users', function(Blueprint $table)
        {
            $table->increments('id');
            $table->string('name', 32);
            $table->string('username', 32);
            $table->string('email', 320);
            $table->string('password', 64);

            // required for Laravel 4.1.26
            $table->string('remember_token', 100)->nullable();

            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::drop('users');
    }

}
Now this migration file will be responsible for creating the users table and also destroying it if needed. To run the migration and create our user table, use the command line again and run:
- php artisan migrate
Just like that, the command will use the up() function, and bam! We have our users table with all the columns we wanted.
Reverting Migrations: If you wanted to roll back migrations, you could use php artisan migrate:rollback or php artisan migrate:reset.
Now that we have our table, let’s create sample users.
Now that we have our table, let’s create sample users.
Seeding is the technique of filling our database with sample data so we can test and create our applications. It really does make building applications much easier. For seeder files, we won’t be using artisan. We’ll make these the good old-fashioned way: a new file. In your app/database/seeds folder, create a file called UserTableSeeder.php.
// app/database/seeds/UserTableSeeder.php
<?php

class UserTableSeeder extends Seeder
{
    public function run()
    {
        DB::table('users')->delete();

        User::create(array(
            'name'     => 'Chris Sevilleja',
            'username' => 'sevilayha',
            'email'    => 'chris@example.com',
            'password' => Hash::make('awesome'),
        ));
    }
}
We will create a user, and all of the above is pretty self-explanatory aside from the password. We will use Laravel’s Hash class to create a secure Bcrypt hash of our password. It is always good practice to hash passwords; to read more about Laravel security, check out the docs. Now that we have created our file, we need Laravel to call it. Inside app/database/seeds/DatabaseSeeder.php, add the line $this->call('UserTableSeeder');.
// app/database/seeds/DatabaseSeeder.php
<?php

class DatabaseSeeder extends Seeder {

    /**
     * Run the database seeds.
     *
     * @return void
     */
    public function run()
    {
        Eloquent::unguard();

        $this->call('UserTableSeeder');
    }

}
Once we’re done with our seeder file, we can inject that user into our database using:
- php artisan db:seed
Now that we have a database, a table thanks to migrations, and a user thanks to seeding, we can build the authentication system. We will need to create routes, controllers, and views for our form.
In Laravel, our routes file dictates the lay of the land in our application. We will define two routes for our login: one for the HTTP GET request to show the form, and one for the HTTP POST request to process the form. Laravel lets us define routes based on HTTP request types, and this helps organize our application and how a user interacts around the site. Add the following routes to our app/routes.php file:
// app/routes.php
<?php
// route to show the login form
Route::get('login', array('uses' => 'HomeController@showLogin'));
// route to process the form
Route::post('login', array('uses' => 'HomeController@doLogin'));
Now if we go to our application in the browser at www.example.com/login, we will get an error because we haven’t defined the HomeController@showLogin function yet. Let’s do that.
In our app/controllers directory, Laravel should already come with a HomeController.php and a BaseController.php. Inside our HomeController.php, we are going to create the two functions we need. Add these two:
// app/controllers/HomeController.php
...
public function showLogin()
{
    // show the form
    return View::make('login');
}

public function doLogin()
{
    // process the form
}
For now, we will only deal with the function to show the form.
The easiest part of this process will be creating our login view. In the app/views folder, create a file called login.blade.php. The .blade.php extension lets Laravel know that we will be using its Blade templating system.
<!-- app/views/login.blade.php -->
<!doctype html>
<html>
<head>
  <title>Look at me Login</title>
</head>
<body>

  {{ Form::open(array('url' => 'login')) }}

    <h1>Login</h1>

    <!-- if there are login errors, show them here -->
    <p>
      {{ $errors->first('email') }}
      {{ $errors->first('password') }}
    </p>

    <p>
      {{ Form::label('email', 'Email Address') }}
      {{ Form::text('email', Input::old('email'), array('placeholder' => 'awesome@example.com')) }}
    </p>

    <p>
      {{ Form::label('password', 'Password') }}
      {{ Form::password('password') }}
    </p>

    <p>{{ Form::submit('Submit!') }}</p>

  {{ Form::close() }}

</body>
</html>
When someone submits the form, it posts to the HomeController@doLogin function. Let’s validate the information and process the form.
If there are validation errors, they will be redirected here and the email input will already be filled in with their old input. Errors will also show if there are any.
Back in our HomeController.php, let’s build out our doLogin() function. We have to validate the information sent to make sure that we have an email and a password. Both fields are required.
// app/controllers/HomeController.php

public function doLogin()
{
    // validate the info, create rules for the inputs
    $rules = array(
        'email'    => 'required|email',         // make sure the email is an actual email
        'password' => 'required|alphaNum|min:3' // password can only be alphanumeric and has to be greater than 3 characters
    );

    // run the validation rules on the inputs from the form
    $validator = Validator::make(Input::all(), $rules);

    // if the validator fails, redirect back to the form
    if ($validator->fails()) {
        return Redirect::to('login')
            ->withErrors($validator)                // send back all errors to the login form
            ->withInput(Input::except('password')); // send back the input (not the password) so that we can repopulate the form
    } else {

        // create our user data for the authentication
        $userdata = array(
            'email'    => Input::get('email'),
            'password' => Input::get('password')
        );

        // attempt to do the login
        if (Auth::attempt($userdata)) {
            // validation successful!
            // redirect them to the secure section or whatever
            // return Redirect::to('secure');
            // for now we'll just echo success (even though echoing in a controller is bad)
            echo 'SUCCESS!';
        } else {
            // validation not successful, send back to form
            return Redirect::to('login');
        }

    }
}
We use the Auth class to authenticate a user. Auth::attempt() will check the plaintext password against the hashed password we saved in our database.
Try the login: Let’s try to log in with the user we created in our app/database/seeds/UserTableSeeder.php file.
If the authentication is successful (Auth::attempt() returns true), we will redirect the user to wherever they should go. After they are logged in, that user’s information is saved and can be accessed using Auth::user()->email;. If the authentication is not successful, we will be kicked back to the login form with errors and the old email input to populate the form.
Logging out is a simple matter. We’ll need a new route and a new function.
Add this route for logout.
// app/routes.php
...
Route::get('logout', array('uses' => 'HomeController@doLogout'));
...
Ideally, this route would be a POST route for security purposes. This will ensure that your logout won’t be accidentally triggered (see http://stackoverflow.com/questions/3521290/logout-get-or-post). Also, to handle this as a POST, you will have to handle your link differently: you’ll have to create a POST request to your logout route.
For logout, we will flush and clean out the session and then redirect our user back to the login screen. You can change this to redirect a user wherever you would like. A home page or even a sad goodbye page.
// app/controllers/HomeController.php

public function doLogout()
{
    Auth::logout(); // log the user out of our application
    return Redirect::to('login'); // redirect the user to the login screen
}
Now that you have the route and the function, you can create a logout button by using the Laravel URL helper.
<!-- LOGOUT BUTTON -->
<a href="{{ URL::to('logout') }}">Logout</a>
Now, going to your login page at www.example.com/login and submitting the login form will give you validation, authentication against a user in the database, and a little more understanding of how Laravel makes building things like an authentication system easier. We’ll be doing a lot more writeups on Laravel, so sound off in the comments if you have questions or anything else you want to see.
Edit Updated the doLogin() function and the view to pass back errors and handle Auth correctly with Hash::check().
Edit #2 Changed back to the Auth::attempt() login method.
Edit #3 Cleaned up and commented code examples. Provided code for download.
Edit #4 Added logout.
Edit #5 Upgraded user migration to work with Laravel 4.1.26
Bootstrap 3 (and now Bootstrap 4) is an amazing CSS framework that can make the lives of developers of any skill level easier. When I was more of a beginner and I first started using Bootstrap, I used every feature of it possible and used to hack it to get things to work the way I wanted. Now, with more experience, I mostly just use its reset and grid system, and I rarely alter any of its core functionality.
Bootstrap’s grid system is fantastic and near-perfect in my opinion. You can read about it here. I often see developers needing to match heights across columns while maintaining responsiveness. I’ve decided to share some of the methods I do to accomplish this, as well as some very cool tricks other developers and friends have taught me, and the general direction and solution that Bootstrap 4 is doing to address this common problem.
I’ve made a demo CodePen to illustrate the issue when the content in columns is different lengths and how it messes with design.
http://codepen.io/ncerminara/pen/PNLRXW
The first solution I’m going to use is with JavaScript. This is pretty straightforward and simply uses JavaScript to match the heights of the columns. The best, easiest, and almost the most “official” JS way is to simply use matchHeight.
There are definitely pros and cons to taking a JavaScript approach. With JavaScript, you get high cross-browser support, but you also have a bigger pageload and it won’t happen until the DOM is ready or loaded depending on when you trigger it. I like this approach though because I actually prefer to not have heights associated with my columns and instead the content in them.
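Under the hood, any JS approach boils down to measuring the columns, taking the tallest, and applying that height to all of them. A DOM-free sketch of that computation (a hypothetical helper, not matchHeight’s actual code):

```javascript
// Sketch of JS height matching: given the measured heights of a row's
// columns, return the height every column should be set to.
function matchHeights(heights) {
  const max = Math.max(...heights); // tallest column wins
  return heights.map(() => max);    // every column gets that height
}
// matchHeights([120, 80, 200]) → [200, 200, 200]
```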
Here’s more info on matchHeight.js: the quickest way to get started is just to reference the CDN link after your jQuery, like so:
<script src="//cdnjs.cloudflare.com/ajax/libs/jquery.matchHeight/0.7.0/jquery.matchHeight-min.js"></script>
MatchHeight is super easy to use and essentially has two main options (among a bunch of other stuff):
Here’s how to match heights on different rows:
$(function() {
  $('.box').matchHeight();
});
http://codepen.io/ncerminara/pen/GZedpW
Here’s how to match the height of all elements on the page:
$(function() {
  $('.box').matchHeight({ byRow: false });
});
http://codepen.io/ncerminara/pen/NNJMWX
If you take either of these approaches, make sure to disable the heights on mobile; since the columns are all stacked there, it won’t matter whether they’re the same height.
You can just override the fixed height at the media query breakpoint (this changes based on whether you use xs, sm, md, or lg). Here’s a demo when using col-sm-*:
@media only screen and (max-width : 767px) {
  .box {
    height: auto !important;
  }
}
The word “table” usually sets off a bunch of red flags with front-end developers, but it’s really not that bad when used right. A lot of people don’t even realize you can force divs and their children to behave like a table element.
Sometimes you want to do this because the table element’s columns have matched heights as a default behavior. Here’s a CSS utility class to trick rows into thinking they’re a table when you’re using col-sm-*, followed by a demo:
@media only screen and (min-width : 768px) {
  .is-table-row {
    display: table;
  }
  .is-table-row [class*="col-"] {
    float: none;
    display: table-cell;
    vertical-align: top;
  }
}
<div class="row is-table-row">
  <div class="col-sm-4">...</div>
  <div class="col-sm-4">...</div>
  <div class="col-sm-4">...</div>
</div>
http://codepen.io/ncerminara/pen/EKMLXx
You’ll have to adjust this a bit based on what size column you’re using. So it would actually make sense to create multiple utility classes: is-xs-table-row, is-sm-table-row, is-md-table-row, and is-lg-table-row, or just manually make sure you check for responsiveness.
You’ll also notice I adjusted the styles a bit because the columns now have a height (not the custom .box element). If you take this approach, you’ll have to plan for this.
This approach is really, really cool and probably the best solution for most people. I have no idea who came up with it, but it is super creative and has many benefits:
It also has a lot of cons though: you can’t easily pin content (like a .btn) to the bottom of a column, and while there are workarounds for this, they’re a bit unnatural; it also depends on the row clipping the fake padding with overflow: hidden.
Here’s a utility class for it:
.row.match-my-cols {
  overflow: hidden;
}
.row.match-my-cols [class*="col-"] {
  margin-bottom: -99999px;
  padding-bottom: 99999px;
}
<div class="row match-my-cols">
  <div class="col-sm-4">...</div>
  <div class="col-sm-4">...</div>
  <div class="col-sm-4">...</div>
</div>
http://codepen.io/ncerminara/pen/EKMLeP
It adds 99999px of height to the column via padding and then uses the negative margin to force the position as if it is not there. Then the .row just hides anything that is overflowed.
Flexbox is the CSS3 God’s gift to the world of grumpy front-end developers. It’s the ultimate tool for layouts and “gridding” via CSS. You can learn all about it with this Visual Guide to CSS3 Flexbox Properties.
There’s only one problem. Internet Explorer browser support is awful. IE9 and below provides zero support, IE10 is a crapshoot with it, and IE11 has many bugs. It’s really only useful to a select few privileged developers, but know Flexbox is coming and here to stay.
This method does equal heights, is super easy, is out-of-the-box responsive, and has everything you can ask for. Here’s a demo:
.row.is-flex {
display: flex;
flex-wrap: wrap;
}
.row.is-flex > [class*='col-'] {
display: flex;
flex-direction: column;
}
/*
* And with max cross-browser enabled.
* Nobody should ever write this by hand.
* Use a preprocessor with autoprefixing.
*/
.row.is-flex {
display: -webkit-box;
display: -webkit-flex;
display: -ms-flexbox;
display: flex;
-webkit-flex-wrap: wrap;
-ms-flex-wrap: wrap;
flex-wrap: wrap;
}
.row.is-flex > [class*='col-'] {
display: -webkit-box;
display: -webkit-flex;
display: -ms-flexbox;
display: flex;
-webkit-box-orient: vertical;
-webkit-box-direction: normal;
-webkit-flex-direction: column;
-ms-flex-direction: column;
flex-direction: column;
}
<div class="row is-flex">
<div class="col-sm-4">...</div>
<div class="col-sm-4">...</div>
<div class="col-sm-4">...</div>
</div>
http://codepen.io/ncerminara/pen/qZvYgR
Bootstrap 4 will have two options for its grid: “With Flexbox” and “Without Flexbox”. If you opt-in with the Flexbox option, the heights are matched automatically. You can read more about it at What’s New in Bootstrap.
Here’s an awesome demo showing the beauty of it:
http://codepen.io/ncerminara/pen/EjqbPj
The problem is still browser support, and Bootstrap 4 is, as of this writing, still in alpha and not production-ready.
Bootstrap 4 also introduced a concept called “Cards”. Cards are defined as “a flexible and extensible content container. It includes options for headers and footers, a wide variety of content, contextual background colors, and powerful display options.”
You can read more about it here.
Really, all it means is it gives you out-of-the-box Bootstrap styles for the .box demoed in these CodePens. This is really cool because there are so many options for matching heights on columns. The only thing is it’s not technically part of the “grid”, but it is a phenomenal solution for matching the heights of columns regardless.
Here’s a demo:
http://codepen.io/ncerminara/pen/gpVXxz
What’s cool about Cards in Bootstrap 4 is if you don’t opt-in with Flexbox, it will use tables to trick the heights of the columns to match. If you do, it will use Flexbox instead. This is one of the most exciting things about Bootstrap 4 in my opinion.
Bootstrap is simply a framework. At the end of the day, it’s ultimately up to you, the developer, to make it work the way you want with your design. You can use all of these methods, some of them, or whatever mix works for you. It really doesn’t matter so long as you understand the pros and cons.
I personally don’t like making CSS adjustments on any base Bootstrap thing: .container, .row, .col-*-*. I think it’s too easy for developers to do unintentional things that alter the grid itself (like adding left or right margin or padding) and break the default functionality. It’s really up to you though!
A custom hook is a JavaScript function with a unique naming convention: its name must start with use, and it may call other hooks. The whole idea behind custom hooks is just so that we can extract component logic into reusable functions.
Oftentimes as we build out React applications, we find ourselves writing almost the exact same code in two or more different components. Ideally, what we could do in such cases is extract that recurrent logic into a reusable piece of code (a hook) and reuse it wherever it’s needed.
Before hooks, we shared stateful logic between components using render props and higher-order components. However, since the introduction of hooks, and once we came to understand how neatly they handle these concepts, it no longer made sense to keep using those. Basically, when we want to share logic between two JavaScript functions, we extract it into a third function; this works because both components and hooks are just functions.
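That last idea, extracting logic shared by two functions into a third, is worth a concrete sketch. The names below are purely illustrative (plain functions stand in for components so the example stays framework-free):

```javascript
// Two "components" (plain functions here) that both need the same
// price-formatting logic. Instead of duplicating it, we extract it
// into a shared function -- the same idea behind custom hooks.
function formatPrice(cents) {
  return `$${(cents / 100).toFixed(2)}`;
}

function cartRow(item) {
  // reuses the extracted logic
  return `${item.name}: ${formatPrice(item.cents)}`;
}

function receiptLine(item) {
  // reuses it again instead of re-implementing it
  return `PAID ${formatPrice(item.cents)} for ${item.name}`;
}

console.log(cartRow({ name: "Book", cents: 1250 }));     // "Book: $12.50"
console.log(receiptLine({ name: "Book", cents: 1250 })); // "PAID $12.50 for Book"
```

A custom hook is the same move, just with stateful logic (state, effects) living in the extracted function.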
The rationale behind this move is no different from what we have already explained above. Compared to using the native fetch API out of the box, abstracting it into the useFetch hook gives us a one-liner call, a more declarative code style, reusable logic, and overall cleaner code, as we’ll see in a minute. Consider this simple useFetch example:
const useFetch = (url, options) => {
const [response, setResponse] = React.useState(null);
useEffect(async () => {
const res = await fetch(url, options);
const json = await res.json();
setResponse(json);
});
return response;
};
Here, the effect hook called useEffect is used to perform the major functions: fetching the data with the native fetch API and setting it in the component’s local state with the state hook’s update function. Also, notice that the promise resolving happens with async/await.
The effect hook runs on two occasions: when the component mounts and when the component updates. What this means is, if nothing is done about the useFetch example above, we will most definitely run into a scary infinite loop. Why? Because we are setting the state after every data fetch; when we set the state, the component updates and the effect runs again.
Obviously, this results in an infinite data-fetching loop, and we don’t want that. What we do want is to fetch data only when the component mounts, and we have a neat way of doing it. All we have to do is provide an empty array as the second argument to the effect hook; this stops it from running on component updates, so it only runs when the component mounts.
useEffect(async () => {
const res = await fetch(url, options);
const json = await res.json();
setResponse(json);
}, []); // empty array
The second argument is an array containing all the variables the hook depends on. If any of those variables change, the hook runs again; but if the argument is an empty array, the hook doesn’t run on component updates, since there are no variables to watch.
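To make that rule concrete, here is a tiny sketch of the comparison a dependency array implies. This is not React’s actual implementation, just an illustration of the decision it makes on each render:

```javascript
// Illustrative sketch (not React internals): the effect re-runs only
// if some dependency changed since the previous render.
function shouldRunEffect(prevDeps, nextDeps) {
  if (prevDeps === undefined) return true; // first render (mount): always run
  if (nextDeps.length === 0) return false; // empty array: never re-run on updates
  // re-run if any dependency is different from last time
  return nextDeps.some((dep, i) => !Object.is(dep, prevDeps[i]));
}

console.log(shouldRunEffect(undefined, []));  // true  (mount)
console.log(shouldRunEffect([], []));         // false (empty deps: skip updates)
console.log(shouldRunEffect(["/a"], ["/a"])); // false (url unchanged)
console.log(shouldRunEffect(["/a"], ["/b"])); // true  (url changed, fetch again)
```

This is why passing `[]` gives us “fetch once on mount”, and why listing `url` in the array would re-fetch whenever the url changes.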
You may have noticed that in the effect hook above, we are using async/await to fetch data. However, according to documentation stipulations, every function annotated with async returns an implicit promise. So in our effect hook, we are returning an implicit promise whereas an effect hook should only return either nothing or a clean-up function.
So by design we are already breaking this rule, because our async effect callback returns an implicit promise. As a result, if we go ahead with the code as is, we will get a warning in the console pointing out that the useEffect function must return a cleanup function or nothing.
Warning: An effect function must not return anything besides a function, which is used for clean-up.
It looks like you wrote useEffect(async () => ...) or returned a Promise. Instead, write the async function inside your effect and call it immediately:
useEffect(() => {
async function fetchData() {
// You can await here
const response = await MyAPI.getData(someId);
// ...
}
fetchData();
}, [someId]); // Or [] if effect doesn't need props or state
Learn more about data fetching with Hooks: https://fb.me/react-hooks-data-fetching
Simply put, using async functions directly as the useEffect() callback is frowned upon. What we can do to fix this is exactly what is recommended in the warning above: write the async function and use it inside the effect.
React.useEffect(() => {
const fetchData = async () => {
const res = await fetch(url, options);
const json = await res.json();
setResponse(json);
};
fetchData();
}, []);
Instead of using the async function directly inside the effect function, we created a new async function fetchData() to perform the fetching operation, and we simply call it inside useEffect. This way, we abide by the rule of returning nothing or just a cleanup function in an effect hook. And if you check back on the console, you won’t see any more warnings.
One thing we haven’t covered so far is how to handle errors with this approach. Well, it’s not complicated: when using async/await, it is common practice to use the good old try/catch construct for error handling, and thankfully it works for us here too.
const useFetch = (url, options) => {
const [response, setResponse] = React.useState(null);
const [error, setError] = React.useState(null);
React.useEffect(() => {
const fetchData = async () => {
try {
const res = await fetch(url, options);
const json = await res.json();
setResponse(json);
} catch (error) {
setError(error);
}
};
fetchData();
}, []);
return { response, error };
};
Here, we used the very popular JavaScript try/catch syntax to set and handle the error state. The error itself is just another state variable initialized with a state hook, so whenever the hook runs, the error state resets. Whenever there is an error state, the component can render feedback to the user, or you can perform any desired operation with it.
You may already know this, but I still feel it’s helpful to point out that you can use hooks to handle loading states for your fetching operations. The good thing is, it’s just another state variable managed by a state hook, so if we wanted to implement a loading state in our last example, we’d add the state variable and update our useFetch() function accordingly.
const useFetch = (url, options) => {
const [response, setResponse] = React.useState(null);
const [error, setError] = React.useState(null);
const [isLoading, setIsLoading] = React.useState(false);
React.useEffect(() => {
const fetchData = async () => {
setIsLoading(true);
try {
const res = await fetch(url, options);
const json = await res.json();
setResponse(json);
setIsLoading(false);
} catch (error) {
setError(error);
setIsLoading(false); // stop the loading state on errors too
}
};
fetchData();
}, []);
return { response, error, isLoading };
};
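The object the hook returns is what a component branches on. The consumer below is hypothetical (a plain function with illustrative names rather than JSX), just to show the priority of the three states:

```javascript
// Hypothetical consumer of useFetch's { response, error, isLoading }
// return value: decide what the component should show. Extracted as a
// plain function so the branching logic is easy to follow outside React.
function renderState({ response, error, isLoading }) {
  if (isLoading) return "Loading...";                          // still fetching
  if (error) return `Something went wrong: ${error.message}`;  // fetch failed
  if (!response) return "No data yet";                         // nothing fetched
  return `Loaded: ${JSON.stringify(response)}`;                // success
}

console.log(renderState({ response: null, error: null, isLoading: true }));
// "Loading..."
console.log(renderState({ response: null, error: new Error("404"), isLoading: false }));
// "Something went wrong: 404"
```

Checking isLoading first means the user never sees a stale error or empty-data message while a fetch is in flight.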
We cannot complete this tutorial without a hands-on demonstration to put everything we’ve talked about into practice. Let’s build a mini-app that fetches a bunch of dog images and their names. We’ll use useFetch to call the Dog API for the data we need.
First, we define our useFetch() function, which is exactly the same as before. We will simply reuse the one we created while demonstrating error handling above, since it already has most of the things we’ll need.
const useFetch = (url, options) => {
const [response, setResponse] = React.useState(null);
const [error, setError] = React.useState(null);
React.useEffect(() => {
const fetchData = async () => {
try {
const res = await fetch(url, options);
const json = await res.json();
setResponse(json);
} catch (error) {
setError(error);
}
};
fetchData();
}, []);
return { response, error };
};
Next, we create the App() function that will actually use our useFetch() function to request the dog data we need and display it on screen.
function App() {
const res = useFetch("https://dog.ceo/api/breeds/image/random", {});
if (!res.response) {
return <div>Loading...</div>
}
const dogName = res.response.status
const imageUrl = res.response.message
return (
<div className="App">
<div>
<h3>{dogName}</h3>
<div>
<img src={imageUrl} alt="avatar" />
</div>
</div>
</div>
);
}
Here, we just passed the URL into the useFetch() function with an empty options object to fetch the data for the dog. It’s really that simple, nothing elaborate or complex. Once we’ve fetched the data, we just extract it from the response object and display it on screen. Here’s a demo on Codesandbox:
https://codesandbox.io/s/nostalgic-hopper-qsl0p
Data fetching has always been an issue to contend with when building front-end applications, usually because of all the edge cases you need to account for. In this post, we explained and built a small demo to show how we can declaratively fetch data and render it on screen by using the useFetch hook with the native fetch() API.
File uploads are one of the most commonly used features on the web. From uploading avatars to family pictures to sending documents via email, we can’t do without files on the web.
In today’s article we will cover all the ways to handle files in Laravel. After reading the article, if we left something out, please let us know in the comments and we’ll update the post accordingly.
Handling files is another thing Laravel has simplified in its ecosystem. Before we get started, we’ll need a few things. First, a Laravel project. There are a few ways to create a new Laravel project, but let’s stick to composer
for now.
- composer create-project --prefer-dist laravel/laravel files
Where files is the name of our project. After installing the app, we’ll need a few packages installed, so let’s get them out of the way. Note that these packages are only necessary if you intend to save images to Amazon S3 or manipulate images (cropping, filters, etc.).
- composer require league/flysystem-aws-s3-v3:~1.0 intervention/image:~2.4
After installing the dependencies, the final one is Mailtrap. Mailtrap is a fake SMTP server for development teams to test, view, and share emails sent from the development and staging environments without spamming real customers. So head over to Mailtrap and create a new inbox for testing.
Then, in welcome.blade.php, update the head tag to:
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>File uploads</title>
<style>
* {
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto,
"Helvetica Neue", Arial, sans-serif, "Apple Color Emoji",
"Segoe UI Emoji", "Segoe UI Symbol";
}
</style>
Modify the body contents to:
<form action="/process" enctype="multipart/form-data" method="POST">
<p>
<label for="photo">
<input type="file" name="photo" id="photo">
</label>
</p>
<button>Upload</button>
{{ csrf_field() }}
</form>
For the file upload form, the enctype="multipart/form-data" and method="POST" attributes are extremely important, as they tell the browser how to properly format the request. {{ csrf_field() }} is Laravel specific and will generate a hidden input field with a token that Laravel can use to verify that the form submission is legit.
If the CSRF token does not exist on the page, Laravel will show “The page has expired due to inactivity” page.
Now that we have our dependencies out of the way, let’s get started.
Development, as we know it in 2018, is growing fast, and in most cases, there are many solutions to one problem. Take file hosting, for example, now we have so many options to store files, the sheer number of solutions ranging from self-hosted to FTP to cloud storage to GFS and many others.
Since Laravel is a framework that encourages flexibility, it has a native way to handle many file storage systems. Be it local, Amazon S3, or Google Cloud, Laravel has you covered.
Laravel’s solution to this problem is to call them disks. Makes sense, any file storage system you can think of can be labeled as a disk in Laravel. In this regard, Laravel comes with native support for some providers (disks). We have local, public, s3, Rackspace, FTP, etc. All this is possible because of Flysystem.
If you open config/filesystems.php you’ll see the available disks and their respective configurations.
From the introduction section above, we have a form with a file input ready to be processed. We can see that the form posts to /process. In routes/web.php, we define a new POST /process route.
use Illuminate\Http\Request;
Route::post('process', function (Request $request) {
$path = $request->file('photo')->store('photos');
dd($path);
});
What the above code does is grab the photo field from the request and save it to the photos folder. dd() is a Laravel function that kills the running script and dumps the argument to the page. For me, the file was saved to photos/3hcX8yrOs2NYhpadt4Eacq4TFtpVYUCw6VTRJhfn.png. To find this file on the file system, navigate to storage/app and you’ll find the uploaded file.
If you don’t like the default naming pattern provided by Laravel, you can provide yours using the storeAs method.
Route::post('process', function (Request $request) {
// cache the file
$file = $request->file('photo');
// generate a new filename. getClientOriginalExtension() for the file extension
$filename = 'profile-photo-' . time() . '.' . $file->getClientOriginalExtension();
// save to storage/app/photos as the new $filename
$path = $file->storeAs('photos', $filename);
dd($path);
});
After running the above code, I got photos/profile-photo-1517311378.png.
In config/filesystems.php you can see the disks local and public defined. By default, Laravel uses the local disk configuration. The major difference between the local and public disks is that local is private and cannot be accessed from the browser, while public can be accessed from the browser.
Since the public disk is in storage/app/public and Laravel’s server root is in public, you need to link storage/app/public to Laravel’s public folder. We can do that with our trusty artisan by running php artisan storage:link.
Since Laravel doesn’t provide a function to upload multiple files, we need to do that ourselves. It’s not much different from what we’ve been doing so far, we just need a loop.
First, let’s update our file upload input to accept multiple files.
<input type="file" name="photos[]" id="photo" multiple>
When we try to process $request->file('photos'), it’s now an array of UploadedFile instances, so we need to loop through the array and save each file.
Route::post('process', function (Request $request) {
$photos = $request->file('photos');
$paths = [];
foreach ($photos as $photo) {
$extension = $photo->getClientOriginalExtension();
$filename = 'profile-photo-' . time() . '.' . $extension;
$paths[] = $photo->storeAs('photos', $filename);
}
dd($paths);
});
After running this, I got the following array, since I uploaded a GIF and a PNG:
array:2 [▼
0 => "photos/profile-photo-1517315875.gif"
1 => "photos/profile-photo-1517315875.png"
]
Validation for file uploads is extremely important. Apart from preventing users from uploading the wrong file types, it’s also about security. Let me give an example. There’s a PHP configuration option, cgi.fix_pathinfo=1. When PHP encounters a URL like https://example.com/images/evil.jpg/nonexistent.php, it will assume nonexistent.php is a PHP file and try to run it. When it discovers that nonexistent.php doesn’t exist, PHP will try to “fix” the path and execute evil.jpg (a PHP file disguised as a JPEG) instead. Because evil.jpg wasn’t validated when it was uploaded, a hacker now has a script they can freely run live on your server… Not… good.
To validate files in Laravel, there are so many ways, but let’s stick to controller validation.
Route::post('process', function (Request $request) {
// validate the uploaded file
$validation = $request->validate([
'photo' => 'required|file|image|mimes:jpeg,png,gif,webp|max:2048'
// for multiple file uploads
// 'photo.*' => 'required|file|image|mimes:jpeg,png,gif,webp|max:2048'
]);
$file = $validation['photo']; // get the validated file
$extension = $file->getClientOriginalExtension();
$filename = 'profile-photo-' . time() . '.' . $extension;
$path = $file->storeAs('photos', $filename);
dd($path);
});
For the above snippet, we told Laravel to make sure the field named photo is required, is a successfully uploaded file, is an image, has one of the defined MIME types, and is at most 2048 kilobytes (~2 megabytes).
Now, when a malicious user uploads a disguised file, the file will fail validation, and even if for some weird reason you leave cgi.fix_pathinfo on, this is no longer a way you can get pwned.
If you head over to Laravel’s validation page you’ll see a whole bunch of validation rules.
Okay, your site is now an adult, it has many visitors and you decide it’s time to move to the cloud. Or maybe from the beginning, you decided your files will live on a separate server. The good news is Laravel comes with support for many cloud providers, but, for this tutorial, let’s stick with Amazon.
Earlier we installed league/flysystem-aws-s3-v3 through composer. If you choose to use Amazon S3, Laravel will automatically look for it and throw an exception if it’s missing.
To upload files to the cloud, just use:
$request->file('photo')->store('photos', 's3');
For multiple file uploads:
foreach ($photos as $photo) {
$extension = $photo->getClientOriginalExtension();
$filename = 'profile-photo-' . time() . '.' . $extension;
$paths[] = $photo->storeAs('photos', $filename, 's3');
}
Users may have already uploaded files before you decide to switch to a cloud provider, you can check the upcoming sections for what to do when files already exist.
Note: You’ll have to configure your Amazon S3 credentials in config/filesystems.php.
Before we do this, let’s quickly configure our mail environment. In the .env file you will see this section:
MAIL_DRIVER=smtp
MAIL_HOST=smtp.mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null
We need a username and password which we can get at Mailtrap.io. Mailtrap is really good for testing emails during development as you don’t have to crowd your email with spam. You can also share inboxes with team members or create separate inboxes.
First, create an account and login:
https://www.youtube.com/watch?v=xOAPpZSkMIQ
After copying the credentials, we can modify .env to:
MAIL_DRIVER=smtp
MAIL_HOST=smtp.mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=USERNAME
MAIL_PASSWORD=PASSWORD
MAIL_ENCRYPTION=null
Don’t bother using mine, I deleted it.
Create your mailable
- php artisan make:mail FileDownloaded
Then, edit its build method and change it to:
public function build()
{
return $this->from('files@mailtrap.io')
->view('emails.files_downloaded')
->attach(storage_path('app/file.txt'), [
'as' => 'secret.txt'
]);
}
As you can see from the method above, we pass the absolute file path to the attach() method along with an optional array where we can change the name of the attachment or even add custom headers. Next, we need to create our email view.
Create a new view file in resources/views/emails/files_downloaded.blade.php and place the content below.
<h1>Only you can stop forest fires</h1>
<p>Lorem, ipsum dolor sit amet consectetur adipisicing elit. Labore at reiciendis consequatur, ea culpa molestiae ad minima est quibusdam ducimus laboriosam dolorem, quasi sequi! Atque dolore ullam nisi accusantium. Tenetur!</p>
Now, in routes/web.php, we can create a new route and trigger a mail when we visit it.
use App\Mail\FileDownloaded;
Route::get('mail', function () {
$email = 'bruce.wayne@batcave.io';
Mail::to($email)->send(new FileDownloaded);
dd('done');
});
If you head over to Mailtrap, you should see this.
In an application, it’s not every time we process files through uploads. Sometimes we decide to defer cloud file uploads until a certain user action is complete. Other times we have some files on disk before switching to a cloud provider. For times like this, Laravel provides a convenient Storage facade. For those who don’t know, facades in Laravel are class aliases. So instead of doing something like Symfony\File\Whatever\Long\Namespace\UploadedFile, we can just use Storage.
Choosing a disk to upload a file: if no disk is specified, Laravel looks in config/filesystems.php and uses the default disk.
Storage::disk('local')->exists('file.txt');
Use default cloud provider:
// Storage::disk('cloud')->exists('file.txt'); will not work so do:
Storage::cloud()->exists('file.txt');
Create a new file with contents:
Storage::put('file.txt', 'Contents');
Prepend to file:
Storage::prepend('file.txt', 'Prepended Text');
Append to file:
Storage::append('file.txt', 'Appended Text');
Get file contents:
Storage::get('file.txt')
Check if file exists:
Storage::exists('file.txt')
Force file download:
Storage::download('file.txt', $name, $headers); // $name and $headers are optional
Generate publicly accessible URL:
Storage::url('file.txt');
Generate a temporary public URL (i.e., files that won’t exist after a set time). This will only work for cloud providers as Laravel doesn’t yet know how to handle the generation of temporary URLs for the local disk.
Storage::temporaryUrl('file.txt', now()->addMinutes(10));
Get file size:
Storage::size('file.txt');
Last modified date:
Storage::lastModified('file.txt')
Copy files:
Storage::copy('file.txt', 'shared/file.txt');
Move files:
Storage::move('file.txt', 'secret/file.txt');
Delete files:
Storage::delete('file.txt');
To delete multiple files:
Storage::delete(['file1.txt', 'file2.txt']);
Resizing images, adding filters, etc. This is where Laravel needs external help. Adding this feature natively would only bloat the application, since not all installs need it. We need a package called intervention/image. We already installed this package earlier, but for reference:
- composer require intervention/image
Since Laravel can automatically detect packages, we don’t need to register anything. If you are using a version of Laravel lower than 5.5, read this.
To resize an image
$image = Image::make(storage_path('app/public/profile.jpg'))->resize(300, 200);
Even Laravel’s packages are fluent.
You can head over to their website and see all the fancy effects and filters you can add to your image.
Laravel also provides handy helpers to work with directories. They are all based on PHP iterators so they’ll provide the utmost performance.
To get all files:
Storage::files($directory_name);
To get all files in a directory including files in sub-folders
Storage::allFiles($directory_name);
To get all directories within a directory
Storage::directories($directory_name);
To get all directories within a directory including files in sub-directories
Storage::allDirectories($directory_name);
Make a directory
Storage::makeDirectory($directory_name);
Delete a directory
Storage::deleteDirectory($directory_name);
If we left anything out, please let us know down in the comments. Also, check out Mailtrap; they are really good and will help you sail through the development phase with regards to debugging emails.
Want to install all of the extensions listed below at once?! Check out The Web Development Essentials Extension
Believe it or not, debugging JavaScript means more than just writing console.log() statements (although that’s a lot of it). Chrome has features built in that make debugging a much better experience. This extension gives you all (or close to all) of those debugging features right inside of VS Code!
If you want to learn more about debugging you should read Debugging JavaScript in Chrome and Visual Studio Code.
Marketplace Link: Debugger for Chrome
I loooove snippet extensions. I’m a firm believer that there’s no need to retype the same piece of code over and over again. This extension provides you with snippets for popular pieces of modern (ES6) JavaScript code.
Side note…if you’re not using ES6 JavaScript features, you should be!
Marketplace Link: JavaScript Snippets
Want to write better code? Want consistent formatting across your team? Install ESLint. This extension can be configured to auto format your code as well as “yell” with linting errors/warnings. VS Code specifically is also perfectly configured to show you these errors/warnings.
Check out the ESLint docs for more info.
Marketplace Link: ESLint
Make changes in your code editor, switch to the browser, and refresh to see the changes. That’s the endless cycle of a developer, but what if your browser automatically refreshed anytime you made changes? That’s where Live Server comes in!
It also runs your app on a localhost server. There are some things you can only test when running your app from a server, so this is a nice benefit.
Marketplace Link: Live Server
Brackets are the bane of a developer’s existence. With tons of nested code, it gets almost impossible to determine which brackets match up with each other. Bracket Pair Colorizer (as you might expect) colors matching brackets to make your code much more readable. Trust me, you want this!
Marketplace Link: Bracket Pair Colorizer
Need to rename an element in HTML? Well, with Auto Rename Tag, you just need to rename either the opening or closing tag, and the other will be renamed automatically. Simple, but effective!
Marketplace Link: Auto Rename Tag
Need a quick place to test out some JavaScript? I used to open up the console in Chrome and type some code right there, but there were many downsides. Quokka gives you a JavaScript (and TypeScript) scratchpad in VS Code. This means you can test out a piece of code right there in your favorite editor!
Marketplace Link: Quokka
In large projects, remembering specific file names and the directories your files are in can get tricky. This extension will provide you intellisense for just that. As you start typing a path in quotations, you will get intellisense for directories and file names. This will save you from spending a lot of time in the file explorer.
Marketplace Link: Path Intellisense
One thing I hate is switching between projects in VS Code. Every time, I have to open the file explorer and find the project on my computer. But that changes with Project Manager. With this extension, you get an extra menu in your side menu for your projects. You can quickly switch between projects, save favorites, or auto-detect Git projects from your file system.
If you work on multiple different projects, this is a great way to stay organized and be more efficient.
Marketplace Link: Project Manager
EditorConfig is a standard for a handful of coding styles that is respected across major text editors/IDEs. Here’s how it works: you save a config file in your repository, which your editor respects. In this case, you have to add an extension to VS Code for it to respect these config files. Super easy to set up, and it works great on team projects.
Read more on the Editor Config Docs.
Marketplace Link: Editor Config
Are you an avid Sublime user, nervous to switch over to VS Code? This extension will make you feel right at home, by changing all of the shortcuts to match those of Sublime. Now, what excuse do you have for not switching over?
Marketplace Link: Sublime Keybindings
I love the Live Server extension (mentioned above), but this extension goes another step further in terms of convenience. It gives you a live-reloading preview right inside of VS Code. No more having to tab over to your browser to see a small change!
Marketplace Link: Browser Preview
There are a bunch of git extensions out there, but this one is the most powerful, with tons of features. You get blame information, line and file history, commit searching, and so much more. If you need help with your Git workflow, start with this extension!
Marketplace Link: Git Lens
You know those fancy code screenshots you see in articles and tweets? Well, most likely they came from Polacode. It’s super simple to use. Copy a piece of code to your clipboard, open up the extension, paste the code, and click to save your image!
Marketplace Link: Polacode
DON’T spend time formatting your code… just DON’T. There’s no need to. Earlier, I mentioned ESLint, which provides formatting and linting. If you don’t need the linting part, then go with Prettier. It’s super easy to set up and can be configured to format your code automatically on save.
Never worry about formatting again!
Marketplace Link: Prettier
This extension color codes various types of comments to give them different significance and stand out from the rest of your code. I use this ALL THE TIME for todo comments. It’s hard to ignore a big orange comment telling me I’ve got some unfinished work to do.
There are also color codes for questions, alerts, and highlights. You can also add your own!
Marketplace Link: Better Comments
If you’ve ever wanted to view a file that you’re working on in Github, this extension is for you. After installing, just right-click in your file and you’ll see the option to open it in Github. This is great for checking history, branch versions, etc. if you’re not using the Git Lens extension.
Marketplace Link: Git Link
Did you know you can customize the icons in VS Code? If you look in settings, you’ll see an option for “File Icon Theme”. From there you can choose from the pre-installed icons or install an icon pack. This extension gives you a pretty sweet icon pack that is used by over 11 million people!
Marketplace Link: VS Code Icons
Fan of Google’s Material Design? Then check out this Material-themed icon pack. There are hundreds of different icons, and they are pretty awesome looking!
Marketplace Link: Material Icon Theme
Developers, myself included, spend a lot of time customizing their dev environment, especially their text editors. With the Settings Sync extension, you can save your settings to GitHub. Then you can load them into any new installation of VS Code with one command. Don’t get caught without your amazing setup ever again!
Marketplace Link: Settings Sync
If you’re the kind of person who loves perfect alignment in your code, you need to get Better Align. You can align multiple variable declarations, trailing comments, sections of code, etc. There’s no better way to get a feel for how amazing this extension is than installing it and giving it a try!
Marketplace Link: Better Align
Are you a VIM power user? Bless you if you are, but you can take all of that VIM power user knowledge and use it right inside VS Code. Not the path I personally want to go, but I know how insane productivity can be when using VIM to its potential, so more power to you.
Marketplace Link: VIM Keybindings
What are your favorite VS Code Extensions for web development? Let us know in the comments.
]]>The Publish/Subscribe pattern (commonly referred to as Pub/Sub) is one of the most versatile one-way messaging patterns. You can think of a one-way messaging pattern as dropping a letter (the message) in a mailbox: you send the letter out, and that is the end; you are not waiting for a response or a reply. The message you send goes one way, from you, the sender, to the receiver, and you are not expecting a reply back.
A publisher is the part of the system that generates data or messages, while a subscriber is the part of the system that registers an interest in receiving specific types of messages or data. The Pub/Sub pattern can be implemented in two ways: first, using a peer-to-peer architecture, and second, using a message broker to serve as a mediator for the communication.
The above image illustrates the peer-to-peer Pub/Sub model, where there is a publisher, a node that sends out messages, and subscribers, nodes that register interest in receiving messages from the publisher. The publisher node communicates directly with each of the subscribers, without a mediator, which means each subscriber has to know the address or endpoint of the publisher in order to receive messages.
Note: A node, in this instance, typically refers to an active participant in the messaging network, which could be either a service that publishes information or a service that receives information (a subscriber).
The image above shows the Pub/Sub model using a message broker. You can think of a message broker as a central hub where messages are delivered (published) and distributed (subscribed to) without the two components having to know about each other. In this model, the publisher node or service sends a message, and the broker serves as the mediator that takes the messages from the publisher and distributes them to the subscribers. The subscriber nodes subscribe to the broker rather than to the publisher directly.
The presence of a broker improves the decoupling between the system’s nodes since both the publisher and subscribers interact only with the broker.
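The decoupling the broker provides can be sketched with a minimal in-memory broker. This is a toy stand-in for a real broker such as Redis; the class and method names here are illustrative, not from the tutorial's code:

```javascript
// Toy message broker: publishers and subscribers only know the broker,
// never each other.
class MessageBroker {
  constructor() {
    this.channels = new Map(); // channel name -> array of subscriber callbacks
  }

  subscribe(channel, handler) {
    if (!this.channels.has(channel)) this.channels.set(channel, []);
    this.channels.get(channel).push(handler);
  }

  publish(channel, message) {
    const handlers = this.channels.get(channel) || [];
    handlers.forEach((handler) => handler(message));
  }
}

// Usage: two subscribers receive the same published message.
const broker = new MessageBroker();
broker.subscribe("chat", (msg) => console.log("subscriber A:", msg));
broker.subscribe("chat", (msg) => console.log("subscriber B:", msg));
broker.publish("chat", "hello"); // both subscribers receive "hello"
```

In the peer-to-peer model, each subscriber would instead register its handler directly with the publisher, which is exactly the coupling the broker removes.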
Let’s build a real-time chat application to further demonstrate this pattern.
To start our server-side implementation, we will initialize a basic Node.js app using the command:
npm init -y
which creates a default package.json
.
Next, we will install the WebSocket (ws) dependency package that will be needed during the entire course of this build:
npm install ws
The server-side implementation will be a basic chat server. We need to follow the steps below:
Create a file named app.js
in your directory and put the code below inside:
const http = require("http");
const server = http.createServer((req, res) => {
res.end("Hello Chat App");
});
const PORT = 3459;
server.listen(PORT, () => {
console.log(`Server up and running on port ${PORT}`);
});
The createServer
method on the built-in http
module of Node.js is used to set up the server. The PORT
at which the server should listen for requests is set, and the listen method is called on the server instance to handle incoming requests on that port.
Run the command: node app.js
in your terminal and you should have a response like this:
outputServer up and running on port 3459
If you make a request to this port in your browser, you should get something like this as your response:
Create a file called index.html
in the root directory and input the code below:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
</head>
<body>
<p>Serving HTML file</p>
</body>
</html>
This is a basic HTML file that renders the paragraph “Serving HTML file”. Now we have to read this file and serve it as the response whenever a request is made to our server.
// app.js
const fs = require("fs"); // needed for readFile
const path = require("path"); // needed for join
const server = http.createServer((req, res) => {
  const htmlFilePath = path.join(__dirname, "index.html");
  fs.readFile(htmlFilePath, (err, data) => {
    if (err) {
      res.writeHead(500);
      res.end("Error occurred while reading file");
      return; // stop here so a 200 is not also sent below
    }
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(data);
  });
});
Here we use the built-in path module and its join function to concatenate path segments, and the readFile function to read the index.html file asynchronously. readFile takes two arguments: the path of the file to read and a callback. If an error occurs, a 500 status code is written to the response header and an error message is sent back to the client. If the data is read successfully, we send a 200
success status code in the response header and the file’s contents back to the client. Because no encoding (such as UTF-8) is specified, readFile returns a raw buffer; with an encoding specified it would return the HTML as a string.
Make a request to the server on your browser and you should have this:
This indicates that our HTML file has been read and served successfully.
const webSocketServer = new WebSocket.Server({ server });
webSocketServer.on("connection", (client) => {
console.log("successfully connected to the client");
client.on("message", (streamMessage) => {
console.log("message", streamMessage);
distributeClientMessages(streamMessage);
});
});
const distributeClientMessages = (message) => {
for (const client of webSocketServer.clients) {
if (client.readyState === WebSocket.OPEN) {
client.send(message);
}
}
};
In the preceding code, we create a new WebSocket server (webSocketServer) and attach it to our existing HTTP server (server). This allows us to handle both standard HTTP requests and WebSocket connections on the same port (3459).
The connection event fires when a WebSocket connection is successfully established. The client in the callback function is a WebSocket connection object representing the connection to the client, which is used to send and receive messages and to listen for events like message
from the client.
The distributeClientMessages function sends a received message to all connected clients. It takes a message
argument and iterates over the clients connected to our server. It then checks the connection state of each client (readyState === WebSocket.OPEN
) to ensure the server sends messages only to clients with an open connection. If a client’s connection is open, the server sends the message to that client using the client.send(message)
method.
For the client side implementation, we will modify our index.html
file a little bit.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
</head>
<body>
<p>Pub/Sub Pattern with Chat Messaging</p>
<div id="messageContainer"></div>
<form id="messageForm">
<input
type="text"
id="messageText"
placeholder="Send a message"
style="
padding: 10px;
margin: 5px;
border-radius: 5px;
border: 1px solid #ccc;
outline: none;
"
onfocus="this.style.borderColor='#007bff';"
onblur="this.style.borderColor='#ccc';"
/>
<input
type="submit"
value="Send Message"
style="
padding: 10px;
margin: 5px;
border-radius: 5px;
background-color: #007bff;
color: white;
border: none;
cursor: pointer;
"
onmouseover="this.style.backgroundColor='#0056b3';"
onmouseout="this.style.backgroundColor='#007bff';"
/>
</form>
<script>
const url = window.location.host;
const socket = new WebSocket(`ws://${url}`);
</script>
</body>
</html>
In this piece of code we added a form element with an input and a button for sending messages. WebSocket connections are initiated by the client, so to communicate with the WebSocket-enabled server we set up earlier, we have to create an instance of the WebSocket object, specifying the ws:// URL
that identifies the server we want to use. When logged, the url variable holds the host and port where our server is listening (3459).
console.log("url", url); // localhost:3459
console.log("socket", socket); // { url: "ws://localhost:3459/", readyState: 0, bufferedAmount: 0, onopen: null, onerror: null, onclose: null, extensions: "", protocol: "", onmessage: null, binaryType: "blob" }
So, when you make a request to the server in your browser, you should see this:
Let us upgrade our script so that we can send messages from the client to the server and receive messages from the server.
// index.html
<script>
const url = window.location.host;
const socket = new WebSocket(`ws://${url}`);
const messageContainer = document.getElementById("messageContainer");
socket.onmessage = function (eventMessage) {
eventMessage.data.text().then((text) => {
const messageContent = document.createElement("p");
messageContent.innerHTML = text;
document
.getElementById("messageContainer")
.appendChild(messageContent);
});
};
const form = document.getElementById("messageForm");
form.addEventListener("submit", (event) => {
event.preventDefault();
const message = document.getElementById("messageText").value;
socket.send(message);
document.getElementById("messageText").value = "";
});
</script>
As previously mentioned, we retrieve the url that points to our server from the client side (browser) and create a new WebSocket object instance with it. Then we listen for the form’s submit event, which fires when the Send Message button is clicked. The text entered by the user is extracted from the input element, and the send method is called on the socket instance to send the message to the server.
Note: In order to send a message to the server over the WebSocket connection, the send() method of the WebSocket object is invoked with a single message argument, which can be an ArrayBuffer, Blob, string, or typed array. This method queues the specified message for transmission and returns immediately, without waiting for the message to reach the server.
The onmessage event handler on the socket object is triggered when a message is received from the server and is used to update the user interface with the incoming message. The eventMessage parameter in the callback function carries the data (the message) sent from the server, but it arrives as a Blob. .text() is then called on the Blob, which returns a promise that is resolved with then() to get the actual text from the server.
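Since Blob is also available outside the browser (it is global in Node 18+), the decoding step described above can be tried in isolation. The payload text here is just an example value:

```javascript
// Sketch: a WebSocket "message" payload arrives as a Blob in the browser;
// Blob#text() returns a promise that resolves with the decoded string.
// Runnable under Node 18+ purely to illustrate the API.
const payload = new Blob(["Hello from the server"]);

payload.text().then((text) => {
  console.log(text); // "Hello from the server"
});
```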
Let’s test what we have. Start up the server by running
node app.js
Then, open two different browsers and make a request to http://localhost:3459/
and try sending messages between the browsers:
Let’s say our application starts growing and we try to scale it by running multiple instances of our chat server. What we want to achieve is that two users connected to two different servers can still send text messages to each other. Currently we have only one server, and if we start another one, say http://localhost:3460/
, it will not receive the messages from the server on port 3459
, i.e. only users connected to 3460
can chat with each other. The current implementation works in a way that when a chat message is sent on a server instance, the message is distributed locally to only the clients connected to that particular server, as shown when we open http://localhost:3459/
on two different browsers. Now, let’s see how we can run two different servers and integrate them so they can talk to each other.
Redis is a fast and flexible in-memory data structure store. It is often used as a database or as a cache server. Additionally, it can be used to implement a centralized Pub/Sub message exchange pattern. Redis’s speed and flexibility have made it a very popular choice for sharing data in a distributed system.
The aim here is to integrate our chat servers using Redis as a message broker. Each server instance publishes any message received from its clients (browsers) to the message broker and, at the same time, subscribes to messages coming from the other server instances.
Let’s modify our app.js file:
//app.js
const http = require("http");
const fs = require("fs");
const path = require("path");
const WebSocket = require("ws");
const Redis = require("ioredis");
const redisPublisher = new Redis();
const redisSubscriber = new Redis();
const server = http.createServer((req, res) => {
const htmlFilePath = path.join(__dirname, "index.html");
fs.readFile(htmlFilePath, (err, data) => {
if (err) {
res.writeHead(500);
res.end("Error occurred while reading file");
return; // stop here so a 200 is not also sent below
}
res.writeHead(200, { "Content-Type": "text/html" });
res.end(data);
});
});
const webSocketServer = new WebSocket.Server({ server });
webSocketServer.on("connection", (client) => {
console.log("successfully connected to the client");
client.on("message", (streamMessage) => {
redisPublisher.publish("chat_messages", streamMessage);
});
});
redisSubscriber.subscribe("chat_messages");
redisSubscriber.on("message", (channel, message) => {
console.log("redis", channel, message);
for (const client of webSocketServer.clients) {
if (client.readyState === WebSocket.OPEN) {
client.send(message);
}
}
});
const PORT = process.argv[2] || 3459;
server.listen(PORT, () => {
console.log(`Server up and running on port ${PORT}`);
});
Here we take advantage of Redis’s publish/subscribe capabilities. Two different connection instances are created: one for publishing messages and the other for subscribing to a channel. When a message is sent from the client, we publish it to a Redis channel named “chat_messages” using the publish
method on the redisPublisher
instance. The subscribe
method is called on the redisSubscriber
instance to subscribe to the same chat_messages
channel. Whenever a message is published to this channel, the redisSubscriber.on
event listener is triggered. This listener iterates over all currently connected WebSocket clients and sends the received message to each one. This ensures that when one user sends a message, all other users connected to any server instance receive it in real time.
If you start 2 different servers say:
node app.js 3459
node app.js 3460
When a chat message is sent on one instance, the message is now broadcast across all connected servers, not just the one that received it. You can test this by opening http://localhost:3459/
and http://localhost:3460/
, then sending chats between them and seeing that the messages are broadcast across the two servers in real-time.
You can monitor the messages published to a channel from the redis-cli
and also subscribe to the channel to get the subscribed messages:
Run the command redis-cli
. Then enter MONITOR
. Go back to your browser and send a chat; in your terminal you should see something like this, assuming you send the chat text Wow:
To see subscribed messages as they are published, run the same command redis-cli
and enter SUBSCRIBE channelName
. channelName
in our case is chat_messages. You should see something like this in your terminal if you send the text Great from the browser:
Now, we can have multiple instances of our server running on different ports or even different machines, and as long as they subscribe to the same Redis channel, they can receive and broadcast messages to all connected clients, ensuring users can chat seamlessly across instances.
Remember, we discussed the Pub/Sub pattern implementation using a message broker in the introduction section: this example sums it up perfectly.
In the figure above, there are 2 different clients (browsers) connected to chat servers. The chat servers are interconnected, not directly, but through a Redis instance. This means that while they handle client connections independently, they share information (chat messages) through a common medium (Redis). Each chat server connects to Redis, and this connection is used to publish messages to Redis and to subscribe to Redis channels to receive messages. When a user sends a message, the chat server publishes it to the specified channel on Redis. When Redis receives a published message, it broadcasts it to all subscribed chat servers. Each chat server then relays the message to all of its connected clients, ensuring that every user receives the messages sent by any user, regardless of which server they’re connected to.
This architecture allows us to horizontally scale our chat application by adding more server instances as needed. Each instance can handle its own set of connected clients, and Redis’s pub/sub capabilities ensure consistent message distribution across all instances. This setup is efficient for handling large numbers of simultaneous users and ensures high availability of your application.
In this tutorial we have learned about the Publish/Subscribe pattern by building a simple chat application that demonstrates it, using Redis as a message broker. Up next is learning how to implement a peer-to-peer messaging system for cases where a message broker might not be the best solution, for example in complex distributed systems where a single point of failure (the broker) is not an option. You will find the complete source code of this tutorial here on GitHub.
]]>Images take up a high percentage of the size of your website. Some of these images are below the fold, which means they are not seen immediately by the website visitor, who has to scroll down before they come into view. Imagine if you could show only the images viewed immediately and defer loading the ones below the fold until they are needed. This tutorial will show you how that’s done.
See the previous tutorial on how to use the Intersection Observer API to implement infinite scroll in React. If we can implement infinite scroll, then we should also be able to load images progressively. Both fall under lazy-loading in the user experience mystery land. You should refer to the article introduction to understand how Intersection Observer works.
The example we’ll be considering in this post will contain five images or more, but each of them will have this structure:
<img
src="http://res.cloudinary.com/example/image/upload/c_scale,h_3,w_5/0/example.jpg"
data-src="http://res.cloudinary.com/example/image/upload/c_scale,h_300,w_500/0/example.jpg"
>
Each tag will have a data-src
and a src
attribute:
data-src
is the actual URL for the image (width: 500px
) we want the reader to see.
src
contains a very low-resolution version of the same image (width: 5px
). This version is stretched to fill up the space, giving the visitor a blurred effect while the real image loads. At 100 times smaller in each dimension, the placeholder loads far faster under normal conditions.
The images are stored on a Cloudinary server, which makes it easy to adjust the dimensions of the images through the URL (h_300,w_500
or h_3,w_5
).
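If you want to generate both attributes from a single source, a small helper (hypothetical, not part of the article's code) can derive the tiny placeholder URL by shrinking the transformation segment of the Cloudinary URL:

```javascript
// Hypothetical helper: build the placeholder (src) and full-size (data-src)
// URLs for a Cloudinary image by writing the c_scale transformation segment,
// shrinking each dimension by a factor of 100 for the placeholder.
const cloudinaryPair = (base, publicId, { w, h }) => ({
  src: `${base}/c_scale,h_${Math.max(1, Math.round(h / 100))},w_${Math.max(1, Math.round(w / 100))}/${publicId}`,
  dataSrc: `${base}/c_scale,h_${h},w_${w}/${publicId}`,
});

// Usage with the dimensions from the article's example markup:
const urls = cloudinaryPair(
  "http://res.cloudinary.com/example/image/upload",
  "0/example.jpg",
  { w: 500, h: 300 }
);
// urls.src     ends with ".../c_scale,h_3,w_5/0/example.jpg"
// urls.dataSrc ends with ".../c_scale,h_300,w_500/0/example.jpg"
```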
An observer is an instance of Intersection Observer. You create the instance and use this instance to observe a DOM element. You can observe when the element enters the viewport:
const options = {
rootMargin: '0px',
threshold: 0.1
};
const observer = new IntersectionObserver(handleIntersection, options);
The instance takes a handler and an options argument. The handler is the function called when a matched intersection occurs, while the options argument defines the observer’s behavior. In this case, we want the handler called as soon as at least 10% of the image enters the viewport (threshold: 0.1).
You can use the observer to observe all images in the page:
const images = document.querySelectorAll('img');
images.forEach(img => {
observer.observe(img);
})
In the previous step, you passed handleIntersection as the handler but never defined it, which will throw an error. Let’s create the handler above the instance:
const handleIntersection = (entries, observer) => {
entries.forEach(entry => {
if(entry.intersectionRatio > 0) {
loadImage(entry.target)
}
})
}
The handler is called by the API with an entries
array and the observer
instance. entries
holds an entry for each of the observed DOM elements, the img
elements in this case. If an element is intersecting the viewport, the code calls loadImage
with that element.
loadImage
fetches the image and then sets the image src
appropriately:
const loadImage = (image) => {
const src = image.dataset.src;
fetchImage(src).then(() => {
image.src = src;
})
}
It does this by calling the fetchImage
method with the data-src
value. When the actual image is returned, it then sets the value of image.src
.
fetchImage
fetches the image and returns a promise:
const fetchImage = (url) => {
return new Promise((resolve, reject) => {
const image = new Image();
image.src = url;
image.onload = resolve;
image.onerror = reject;
});
}
For a smoother user experience, you can also add a fade-in effect to the image as it transitions from blurry to crisp. This makes the swap more appealing to the eye when the load time feels slow to the viewer.
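A minimal sketch of such a fade-in, assuming a "loaded" CSS class name (not from the article) and stubbing the image fetch so the snippet stands alone:

```javascript
// Sketch with assumed names: swap in the full-resolution src, then add a
// "loaded" class so CSS rules like
//   img { opacity: .5; transition: opacity .3s; }
//   img.loaded { opacity: 1; }
// fade the image in. fetchImage is stubbed here to keep the sketch runnable.
const fetchImage = (url) => Promise.resolve(url); // stand-in for the promise-based loader

const revealImage = (image) => {
  const src = image.dataset.src;
  return fetchImage(src).then(() => {
    image.src = src; // swap the blurry placeholder for the real image
    image.className = (image.className + " loaded").trim(); // trigger the CSS transition
  });
};
```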
Note that IntersectionObserver
is not widely supported in all browsers, so you might consider using a polyfill or automatically loading the images once the page loads:
if ('IntersectionObserver' in window) {
// Observer code
const observer = new IntersectionObserver(handleIntersection, options);
} else {
// IO is not supported.
// Just load all the images
Array.from(images).forEach(image => loadImage(image));
}
In this tutorial, you configured lazy-loading for images on your website. This will enhance the performance of your site while also providing the user with a better experience.
]]>The Elastic Stack — formerly known as the ELK Stack — is a collection of open-source software produced by Elastic which allows you to search, analyze, and visualize logs generated from any source in any format, a practice known as centralized logging. Centralized logging can be useful when attempting to identify problems with your servers or applications as it allows you to search through all of your logs in a single place. It’s also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.
The Elastic Stack has four main components: Elasticsearch, Logstash, Kibana, and Beats.
In this tutorial, you will install the Elastic Stack on an Ubuntu 20.04 server. You will learn how to install all of the components of the Elastic Stack — including Filebeat, a Beat used for forwarding and centralizing logs and files — and configure them to gather and visualize system logs. Additionally, because Kibana is normally only available on the localhost
, we will use Nginx to proxy it so it will be accessible over a web browser. We will install all of these components on a single server, which we will refer to as our Elastic Stack server.
Note: When installing the Elastic Stack, you must use the same version across the entire stack. In this tutorial we will install the latest versions of the entire stack which are, at the time of this writing, Elasticsearch 7.7.1, Kibana 7.7.1, Logstash 7.7.1, and Filebeat 7.7.1.
To complete this tutorial, you will need the following:
An Ubuntu 20.04 server with 4GB RAM and 2 CPUs set up with a non-root sudo user. You can achieve this by following the Initial Server Setup with Ubuntu 20.04. For this tutorial, we will work with the minimum amount of CPU and RAM required to run Elasticsearch. Note that the amount of CPU, RAM, and storage that your Elasticsearch server will require depends on the volume of logs that you expect.
OpenJDK 11 installed. See the section [Installing the Default JRE/JDK](https://www.digitalocean.com/community/tutorials/how-to-install-java-with-apt-on-ubuntu-20-04#installing-the-default-jrejdk) in our guide How To Install Java with Apt on Ubuntu 20.04 to set this up.
Nginx installed on your server, which we will configure later in this guide as a reverse proxy for Kibana. Follow our guide on How to Install Nginx on Ubuntu 20.04 to set this up.
Additionally, because the Elastic Stack is used to access valuable information about your server that you would not want unauthorized users to access, it’s important that you keep your server secure by installing a TLS/SSL certificate. This is optional but strongly encouraged.
However, because you will ultimately make changes to your Nginx server block over the course of this guide, it would likely make more sense for you to complete the Let’s Encrypt on Ubuntu 20.04 guide at the end of this tutorial’s second step. With that in mind, if you plan to configure Let’s Encrypt on your server, you will need the following in place before doing so:
A fully qualified domain name (FQDN). This tutorial will use your_domain
throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.
Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them.
your_domain
pointing to your server’s public IP address.
www.your_domain
pointing to your server’s public IP address.
The Elasticsearch components are not available in Ubuntu’s default package repositories. They can, however, be installed with APT after adding Elastic’s package source list.
All of the packages are signed with the Elasticsearch signing key in order to protect your system from package spoofing. Packages which have been authenticated using the key will be considered trusted by your package manager. In this step, you will import the Elasticsearch public GPG key and add the Elastic package source list in order to install Elasticsearch.
To begin, use cURL, the command line tool for transferring data with URLs, to import the Elasticsearch public GPG key into APT. Note that we are using the arguments -fsSL to silence all progress and possible errors (except for a server failure) and to allow cURL to make a request on a new location if redirected. Pipe the output of the cURL command into the apt-key program, which adds the public GPG key to APT.
- curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Next, add the Elastic source list to the sources.list.d
directory, where APT will search for new sources:
- echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
Next, update your package lists so APT will read the new Elastic source:
- sudo apt update
Then install Elasticsearch with this command:
- sudo apt install elasticsearch
Elasticsearch is now installed and ready to be configured. Use your preferred text editor to edit Elasticsearch’s main configuration file, elasticsearch.yml
. Here, we’ll use nano
:
- sudo nano /etc/elasticsearch/elasticsearch.yml
Note: Elasticsearch’s configuration file is in YAML format, which means that we need to maintain the indentation format. Be sure that you do not add any extra spaces as you edit this file.
The elasticsearch.yml
file provides configuration options for your cluster, node, paths, memory, network, discovery, and gateway. Most of these options are preconfigured in the file but you can change them according to your needs. For the purposes of our demonstration of a single-server configuration, we will only adjust the settings for the network host.
Elasticsearch listens for traffic from everywhere on port 9200
. You will want to restrict outside access to your Elasticsearch instance to prevent outsiders from reading your data or shutting down your Elasticsearch cluster through its [REST API](https://en.wikipedia.org/wiki/Representational_state_transfer). To restrict access and therefore increase security, find the line that specifies network.host
, uncomment it, and replace its value with localhost
like this:
. . .
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
. . .
We have specified localhost
so that Elasticsearch only accepts connections on the loopback interface. If you want it to listen on a specific interface instead, you can specify that interface's IP in place of localhost
. Save and close elasticsearch.yml
. If you’re using nano
, you can do so by pressing CTRL+X
, followed by Y
and then ENTER
.
These are the minimum settings you can start with in order to use Elasticsearch. Now you can start Elasticsearch for the first time.
Start the Elasticsearch service with systemctl
. Give Elasticsearch a few moments to start up. Otherwise, you may get errors about not being able to connect.
- sudo systemctl start elasticsearch
Next, run the following command to enable Elasticsearch to start up every time your server boots:
- sudo systemctl enable elasticsearch
You can test whether your Elasticsearch service is running by sending an HTTP request:
- curl -X GET "localhost:9200"
You will see a response showing some basic information about your local node, similar to this:
Output{
"name" : "Elasticsearch",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "qqhFHPigQ9e2lk-a7AvLNQ",
"version" : {
"number" : "7.7.1",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
"build_date" : "2020-03-26T06:34:37.794943Z",
"build_snapshot" : false,
"lucene_version" : "8.5.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
Now that Elasticsearch is up and running, let’s install Kibana, the next component of the Elastic Stack.
According to the official documentation, you should install Kibana only after installing Elasticsearch. Installing in this order ensures that the components each product depends on are correctly in place.
Because you’ve already added the Elastic package source in the previous step, you can just install the remaining components of the Elastic Stack using apt
:
- sudo apt install kibana
Then enable and start the Kibana service:
- sudo systemctl enable kibana
- sudo systemctl start kibana
Because Kibana is configured to only listen on localhost
, we must set up a reverse proxy to allow external access to it. We will use Nginx for this purpose, which should already be installed on your server.
First, use the openssl
command to create an administrative Kibana user which you’ll use to access the Kibana web interface. As an example we will name this account kibanaadmin
, but to ensure greater security we recommend that you choose a non-standard name for your user that would be difficult to guess.
The following command will create the administrative Kibana user and password, and store them in the htpasswd.users
file. You will configure Nginx to require this username and password and read this file momentarily:
- echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users
Enter and confirm a password at the prompt. Remember or take note of this login, as you will need it to access the Kibana web interface.
Next, we will create an Nginx server block file. As an example, we will refer to this file as your_domain
, although you may find it helpful to give yours a more descriptive name. For instance, if you have a FQDN and DNS records set up for this server, you could name this file after your FQDN.
Using nano or your preferred text editor, create the Nginx server block file:
- sudo nano /etc/nginx/sites-available/your_domain
Add the following code block into the file, being sure to update your_domain
to match your server’s FQDN or public IP address. This code configures Nginx to direct your server’s HTTP traffic to the Kibana application, which is listening on localhost:5601
. Additionally, it configures Nginx to read the htpasswd.users
file and require basic authentication.
Note that if you followed the prerequisite Nginx tutorial through to the end, you may have already created this file and populated it with some content. In that case, delete all the existing content in the file before adding the following:
server {
listen 80;
server_name your_domain;
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/htpasswd.users;
location / {
proxy_pass http://localhost:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
When you’re finished, save and close the file.
Next, enable the new configuration by creating a symbolic link to the sites-enabled
directory. If you already created a server block file with the same name in the Nginx prerequisite, you do not need to run this command:
- sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/your_domain
Then check the configuration for syntax errors:
- sudo nginx -t
If any errors are reported in your output, go back and double check that the content you placed in your configuration file was added correctly. Once you see syntax is ok
in the output, go ahead and reload the Nginx service:
- sudo systemctl reload nginx
If you followed the initial server setup guide, you should have a UFW firewall enabled. To allow connections to Nginx, we can adjust the rules by typing:
- sudo ufw allow 'Nginx Full'
Note: If you followed the prerequisite Nginx tutorial, you may have created a UFW rule allowing the Nginx HTTP
profile through the firewall. Because the Nginx Full
profile allows both HTTP and HTTPS traffic through the firewall, you can safely delete the rule you created in the prerequisite tutorial. Do so with the following command:
- sudo ufw delete allow 'Nginx HTTP'
Kibana is now accessible via your FQDN or the public IP address of your Elastic Stack server. You can check the Kibana server’s status page by navigating to the following address and entering your login credentials when prompted:
http://your_domain/status
This status page displays information about the server’s resource usage and lists the installed plugins.
Note: As mentioned in the Prerequisites section, it is recommended that you enable SSL/TLS on your server. You can follow the Let’s Encrypt guide now to obtain a free SSL certificate for Nginx on Ubuntu 20.04. After obtaining your SSL/TLS certificates, you can come back and complete this tutorial.
Now that the Kibana dashboard is configured, let’s install the next component: Logstash.
Although it’s possible for Beats to send data directly to the Elasticsearch database, it is common to use Logstash to process the data. This will allow you more flexibility to collect data from different sources, transform it into a common format, and export it to another database.
Install Logstash with this command:
- sudo apt install logstash
After installing Logstash, you can move on to configuring it. Logstash’s configuration files reside in the /etc/logstash/conf.d
directory. For more information on the configuration syntax, you can check out the configuration reference that Elastic provides. As you configure the file, it’s helpful to think of Logstash as a pipeline which takes in data at one end, processes it in one way or another, and sends it out to its destination (in this case, the destination being Elasticsearch). A Logstash pipeline has two required elements, input
and output
, and one optional element, filter
. The input plugins consume data from a source, the filter plugins process the data, and the output plugins write the data to a destination.
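As an illustration of the optional filter element, a sketch like the following could parse raw syslog lines into structured fields with the grok plugin. The filename 10-syslog-filter.conf and the pattern choice are assumptions for this example, not something this tutorial requires:

```
# Hypothetical /etc/logstash/conf.d/10-syslog-filter.conf
filter {
  grok {
    # Parse standard syslog lines into structured fields
    # using Logstash's built-in SYSLOGLINE pattern
    match => { "message" => "%{SYSLOGLINE}" }
  }
}
```

Because Logstash loads every file in conf.d in lexical order, numbering the files (02-input, 10-filter, 30-output) keeps the pipeline stages in a predictable sequence.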
Create a configuration file called 02-beats-input.conf
where you will set up your Filebeat input:
- sudo nano /etc/logstash/conf.d/02-beats-input.conf
Insert the following input
configuration. This specifies a beats
input that will listen on TCP port 5044
.
input {
beats {
port => 5044
}
}
Save and close the file.
Next, create a configuration file called 30-elasticsearch-output.conf:
- sudo nano /etc/logstash/conf.d/30-elasticsearch-output.conf
Insert the following output
configuration. Essentially, this output configures Logstash to store the Beats data in Elasticsearch, which is running at localhost:9200
, in an index named after the Beat used. The Beat used in this tutorial is Filebeat:
output {
if [@metadata][pipeline] {
elasticsearch {
hosts => ["localhost:9200"]
manage_template => false
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
pipeline => "%{[@metadata][pipeline]}"
}
} else {
elasticsearch {
hosts => ["localhost:9200"]
manage_template => false
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
}
}
}
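To make the index name concrete: for an event shipped by Filebeat 7.7.1 on June 4, 2020, the pattern above expands to filebeat-7.7.1-2020.06.04. The following sketch mimics that substitution in plain JavaScript — expandIndex is a hypothetical helper for illustration, not part of Logstash:

```javascript
// Sketch: how Logstash's sprintf-style index pattern expands.
// The replacement logic here is illustrative, not Logstash's real engine.
function expandIndex(pattern, metadata, date) {
  const pad = (n) => String(n).padStart(2, '0');
  return pattern
    .replace('%{[@metadata][beat]}', metadata.beat)
    .replace('%{[@metadata][version]}', metadata.version)
    .replace('%{+YYYY.MM.dd}',
      `${date.getUTCFullYear()}.${pad(date.getUTCMonth() + 1)}.${pad(date.getUTCDate())}`);
}

const name = expandIndex(
  '%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}',
  { beat: 'filebeat', version: '7.7.1' },
  new Date(Date.UTC(2020, 5, 4)) // June 4, 2020
);
console.log(name); // filebeat-7.7.1-2020.06.04
```

Creating one index per day like this keeps indices small and lets you expire old log data by simply deleting old indices.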
Save and close the file.
Test your Logstash configuration with this command:
- sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
If there are no syntax errors, your output will display Config Validation Result: OK. Exiting Logstash
after a few seconds. If you don’t see this in your output, check for any errors noted in your output and update your configuration to correct them. Note that you’ll receive warnings from OpenJDK, but they should not cause any problems and can be ignored.
If your configuration test is successful, start and enable Logstash to put the configuration changes into effect:
- sudo systemctl start logstash
- sudo systemctl enable logstash
Now that Logstash is running correctly and is fully configured, let’s install Filebeat.
The Elastic Stack uses several lightweight data shippers called Beats to collect data from various sources and transport it to Logstash or Elasticsearch. The Beats currently available from Elastic include Filebeat (log files), Metricbeat (system and service metrics), Packetbeat (network data), Winlogbeat (Windows event logs), Auditbeat (audit framework data), and Heartbeat (uptime monitoring).
In this tutorial we will use Filebeat to forward local logs to our Elastic Stack.
Install Filebeat using apt:
- sudo apt install filebeat
Next, configure Filebeat to connect to Logstash. Here, we will modify the example configuration file that comes with Filebeat.
Open the Filebeat configuration file:
- sudo nano /etc/filebeat/filebeat.yml
Note: As with Elasticsearch, Filebeat’s configuration file is in YAML format. This means that proper indentation is crucial, so be sure to use the same number of spaces that are indicated in these instructions.
Filebeat supports numerous outputs, but you’ll usually only send events directly to Elasticsearch or to Logstash for additional processing. In this tutorial, we’ll use Logstash to perform additional processing on the data collected by Filebeat. Filebeat will not need to send any data directly to Elasticsearch, so let’s disable that output. To do so, find the output.elasticsearch
section and comment out the following lines by preceding them with a #:
...
#output.elasticsearch:
# Array of hosts to connect to.
#hosts: ["localhost:9200"]
...
Then, configure the output.logstash
section. Uncomment the lines output.logstash:
and hosts: ["localhost:5044"]
by removing the #
. This will configure Filebeat to connect to Logstash on your Elastic Stack server at port 5044
, the port for which we specified a Logstash input earlier:
output.logstash:
# The Logstash hosts
hosts: ["localhost:5044"]
Save and close the file.
The functionality of Filebeat can be extended with Filebeat modules. In this tutorial we will use the system module, which collects and parses logs created by the system logging service of common Linux distributions.
Let’s enable it:
- sudo filebeat modules enable system
You can see a list of enabled and disabled modules by running:
- sudo filebeat modules list
You will see a list similar to the following:
Output
Enabled:
system
Disabled:
apache2
auditd
elasticsearch
icinga
iis
kafka
kibana
logstash
mongodb
mysql
nginx
osquery
postgresql
redis
traefik
By default, Filebeat is configured to use default paths for the syslog and authorization logs. In the case of this tutorial, you do not need to change anything in the configuration. You can see the parameters of the module in the /etc/filebeat/modules.d/system.yml
configuration file.
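For reference, the enabled module's configuration looks roughly like this — an illustrative excerpt, so check your own /etc/filebeat/modules.d/system.yml for the exact contents shipped with your version:

```
# Illustrative excerpt of /etc/filebeat/modules.d/system.yml
- module: system
  # Syslog
  syslog:
    enabled: true
    # Leaving var.paths commented out uses the distribution's default log paths
    #var.paths:

  # Authorization logs
  auth:
    enabled: true
    #var.paths:
```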
Next, we need to set up the Filebeat ingest pipelines, which parse the log data before sending it through Logstash to Elasticsearch. To load the ingest pipeline for the system module, enter the following command:
- sudo filebeat setup --pipelines --modules system
Next, load the index template into Elasticsearch. An Elasticsearch index is a collection of documents that have similar characteristics. Indexes are identified with a name, which is used to refer to the index when performing various operations within it. The index template will be automatically applied when a new index is created.
To load the template, use the following command:
- sudo filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
Output
Index setup finished.
Filebeat comes packaged with sample Kibana dashboards that allow you to visualize Filebeat data in Kibana. Before you can use the dashboards, you need to create the index pattern and load the dashboards into Kibana.
As the dashboards load, Filebeat connects to Elasticsearch to check version information. To load dashboards when Logstash is enabled, you need to disable the Logstash output and enable Elasticsearch output:
- sudo filebeat setup -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601
You should receive output similar to this:
Output
Overwriting ILM policy is disabled. Set `setup.ilm.overwrite:true` for enabling.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/elastic-stack-overview/current/xpack-ml.html
Loaded machine learning job configurations
Loaded Ingest pipelines
Now you can start and enable Filebeat:
- sudo systemctl start filebeat
- sudo systemctl enable filebeat
If you’ve set up your Elastic Stack correctly, Filebeat will begin shipping your syslog and authorization logs to Logstash, which will then load that data into Elasticsearch.
To verify that Elasticsearch is indeed receiving this data, query the Filebeat index with this command:
- curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
You should receive output similar to this:
Output
...
{
"took" : 4,
"timed_out" : false,
"_shards" : {
"total" : 2,
"successful" : 2,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 4040,
"relation" : "eq"
},
"max_score" : 1.0,
"hits" : [
{
"_index" : "filebeat-7.7.1-2020.06.04",
"_type" : "_doc",
"_id" : "FiZLgXIB75I8Lxc9ewIH",
"_score" : 1.0,
"_source" : {
"cloud" : {
"provider" : "digitalocean",
"instance" : {
"id" : "194878454"
},
"region" : "nyc1"
},
"@timestamp" : "2020-06-04T21:45:03.995Z",
"agent" : {
"version" : "7.7.1",
"type" : "filebeat",
"ephemeral_id" : "cbcefb9a-8d15-4ce4-bad4-962a80371ec0",
"hostname" : "june-ubuntu-20-04-elasticstack",
"id" : "fbd5956f-12ab-4227-9782-f8f1a19b7f32"
},
...
If your output shows 0 total hits, Elasticsearch is not loading any logs under the index you searched for, and you will need to review your setup for errors. If you received the expected output, continue to the next step, in which we will see how to navigate through some of Kibana’s dashboards.
Let’s return to the Kibana web interface that we installed earlier.
In a web browser, go to the FQDN or public IP address of your Elastic Stack server. After entering the login credentials you defined in Step 2, you will see the Kibana homepage:
Click the Discover link in the left-hand navigation bar (you may have to click the Expand icon at the very bottom left to see the navigation menu items). On the Discover page, select the predefined filebeat-* index pattern to see Filebeat data. By default, this will show you all of the log data over the last 15 minutes. You will see a histogram with log events, and some log messages below:
Here, you can search and browse through your logs and also customize your dashboard. At this point, though, there won’t be much in there because you are only gathering syslogs from your Elastic Stack server.
Use the left-hand panel to navigate to the Dashboard page and search for the Filebeat System dashboards. Once there, you can select the sample dashboards that come with Filebeat’s system
module.
For example, you can view detailed stats based on your syslog messages:
You can also view which users have used the sudo
command and when:
Kibana has many other features, such as graphing and filtering, so feel free to explore.
In this tutorial, you’ve learned how to install and configure the Elastic Stack to collect and analyze system logs. Remember that you can send just about any type of log or indexed data to Logstash using Beats, but the data becomes even more useful if it is parsed and structured with a Logstash filter, as this transforms the data into a consistent format that can be read easily by Elasticsearch.
Feathers is a minimalistic real-time framework for web applications built over Express. With Feathers, in addition to using middleware, you can get real-time, RESTful services and ORM support out of the box.
FeathersJS offers a variety of other features that make it useful to front-end developers:
In this tutorial, you’ll build a small, real-time messaging application to demonstrate these features and help you understand the basics of FeathersJS.
To follow this tutorial, you’ll need a working system with Node.js and npm installed. To build specific projects, you may occasionally need to install additional packages as your project requires.
The best way to quickly get started with Feathers is through the command-line interface tool.
From a terminal window, run the following to install Feathers globally:
- npm install -g @feathersjs/cli
Next, install the Yeoman generator for Feathers:
- npm install -g yo generator-feathers
Once installed, create a project directory called feathers-demo:
- mkdir feathers-demo
Change into the newly created directory:
- cd feathers-demo
Create a new application called feathers-app:
- yo feathers
This will prompt you for project configuration details such as the project name, description, API type, and database. Once the generator finishes, start the development server:
- npm start
Your project will now be live on localhost:3030. If you navigate to that port in your browser, you will see your app running.
You have a brand new FeathersJS application. You’re now ready to add a service. Since you are building a demo real-time app, let’s create a message
service. Go back to the terminal and run the command below:
- yo feathers:service
This will prompt you for some answers about the service you’d like to create. Answer the prompts to proceed.
At this point, if you restart the development server and navigate to the new path on the browser, localhost:3030/message
, you will see the database displayed.
You’ll see that the database is empty, so let’s add some data. Back in your terminal, run this curl
command:
- curl 'http://localhost:3030/message/' -H 'Content-Type: application/json' --data-binary '{"text": "Hello world"}'
This will send a POST
request to the /message
endpoint. Once posted, you will see the changes reflected in the database after reloading the page.
Feathers automatically made an API, so you didn’t have to write any get() or post() handlers yourself. You may have also noticed that each message has an automatically generated unique ID.
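The mapping Feathers applies between service methods and REST endpoints can be sketched as follows — the restMap object below is purely illustrative, since Feathers wires these routes up for you:

```javascript
// Illustrative only: how Feathers service methods map onto REST routes
// for a service registered at /message.
const restMap = [
  { method: 'find',   verb: 'GET',    path: '/message' },
  { method: 'get',    verb: 'GET',    path: '/message/:id' },
  { method: 'create', verb: 'POST',   path: '/message' },
  { method: 'update', verb: 'PUT',    path: '/message/:id' },
  { method: 'patch',  verb: 'PATCH',  path: '/message/:id' },
  { method: 'remove', verb: 'DELETE', path: '/message/:id' }
];

console.log(restMap.map(r => `${r.verb} ${r.path} -> ${r.method}()`).join('\n'));
```

This is why the earlier curl command worked: a POST to /message is routed to the service's create method.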
Now you’re ready to configure the ability to allow users to post data to this database and render it on the client in real-time. To do this, open the public/index.html
file and update the code with the one below:
<html>
<head>
<title>Welcome to Feathers</title>
<link
rel="stylesheet"
href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.0/css/bootstrap.min.css"
integrity="sha384-PDle/QlgIONtM1aqA2Qemk5gPOE7wFq8+Em+G/hmo5Iq0CCmYZLv3fVRDJ4MMwEA"
crossorigin="anonymous"
/>
</head>
<body>
<main class="container">
<img
class="logo"
src="svg"
alt="Feathers Logo"
/>
</main>
<div class="card mt-5">
<div class="card-header">
Messages
</div>
<div class="card-body">
<h5 class="card-title">Send Message</h5>
<input class="form-control" type="text" placeholder="message" id="message"/>
<button onclick="sendMessage()" type="button" class="btn btn-primary mt-2">
Send Message
</button>
</div>
</div>
<script src="//cdn.rawgit.com/feathersjs/feathers-client/v1.0.0/dist/feathers.js"></script>
<script src="socket.io/socket.io.js"></script>
<script type="text/javascript">
var socket = io()
var app = feathers()
app.configure(feathers.socketio(socket))
var messages = app.service('message')
messages.on('created', function(message){
console.log('Message created on client', message)
} )
function sendMessage(){
var messageText = document.getElementById('message').value;
messages.create({text: messageText})
}
</script>
</body>
</html>
Now when you check back on your Feathers app at localhost:3030, you will see an updated user interface where users can update the database.
Here, users can type the message in the input field on the left and click the Send Message button. If you follow those steps and reload the page, you will see the message added to your database.
As a final step, let’s update the app to render the messages on the UI in real-time. Update the index.html
file again with the code below:
<html>
<head>
...
</head>
<body>
....
+ <div class="card">
+ <div class="card-body">
+ <p id="messageList" class="card-text"></p>
+ </div>
+ </div>
....
<script src="//cdn.rawgit.com/feathersjs/feathers-client/v1.0.0/dist/feathers.js"></script>
<script src="socket.io/socket.io.js"></script>
<script type="text/javascript">
....
messages.on('created', function(message){
+ var newMessage = document.getElementById("messageList");
+ newMessage.innerHTML += "<h4>" + message.text + "</h4>"
console.log('Message created on client', message)
} )
function sendMessage(){
...
}
</script>
</body>
</html>
With that change, your app will render messages on both the database and UI in real-time across all clients.
In this tutorial, you built a lightweight REST application that is updated in real-time with FeathersJS. You can learn more about FeathersJS at the official documentation page.