Sunday 17 April 2022

Pineapple Mark VII with 5GHz adapter setup

About 

I will explain how to set up a compatible 5GHz adapter to work with the Pineapple Mark VII. The list of compatible adapters is available here. I purchased an Alfa AWUS036ACM from Amazon.

Before we begin, please refer to this article to understand which configuration files to edit and how to back up the existing files. Below you will find the complete configuration.

Configuration

Log in to the Pineapple via SSH and back up the original 2.4GHz configuration as per the article above:

$ ssh root@172.16.42.1
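Before editing anything, keep pristine copies of the three files this guide changes; the ".bak" names below are just a convention, not something the Pineapple requires:

```shell
# Run on the Pineapple: back up the stock 2.4GHz configuration first.
cp /etc/config/wireless /etc/config/wireless.bak
cp /etc/config/pineap /etc/config/pineap.bak
cp /etc/init.d/pineapd /etc/init.d/pineapd.bak
```

Restoring any of these copies and rebooting brings back the out-of-the-box setup.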

Insert the USB adapter into the Pineapple and locate it; it should appear as wlan3.

root@mk7:~# lsusb
root@mk7:~# airmon-ng
root@mk7:~# airmon-ng | grep MediaTek # AWUS036ACM

Now we need to configure the Pineapple to use the new 5GHz interface instead of the default wlan1, which only supports 2.4GHz. We will edit three configuration files to apply the changes, as per the article above. Note that you do not need to follow the article's instructions to change these files: the configuration below is all you need to apply.

wireless

Edit the "/etc/config/wireless" configuration file and find where "wlan1" is configured. You should only change "channel", "hwmode", "path" and "ifname" as follows, leaving everything else the same:

config wifi-device 'radio1' # do not touch (must match with below)
    option channel '44'
    option hwmode '11a'
    option path 'platform/101c0000.ehci/usb1/1-1/1-1.3/1-1.3:1.0'

config wifi-iface
    option device 'radio1' # do not touch (must match with above)
    option ifname 'wlan3'

channel - can be any channel in the 5GHz range.
hwmode - the "a" mode selects the 5GHz band.
path - needs to change because your adapter is connected to its own USB slot. wlan1 is on USB 1-1 and wlan2 on USB 1-2, so wlan3 is probably on USB 1-3.
ifname - the adapter's interface name.
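Rather than guessing the USB slot, you can read the device path straight from sysfs once the adapter is up; the printed path, minus the "/sys/devices/" prefix, is what goes into the "path" option. A sketch, run on the Pineapple (it assumes your adapter came up as wlan3):

```shell
# Resolve the sysfs symlink for the interface to its real device path.
readlink -f /sys/class/net/wlan3/device
# e.g. /sys/devices/platform/101c0000.ehci/usb1/1-1/1-1.3/1-1.3:1.0
```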

pineap

Edit the "/etc/config/pineap" configuration file. You should only change "ap_channel" and "pineap_interface" as follows, leaving everything else the same:

option ap_channel '44'
option pineap_interface 'wlan3mon'

ap_channel - match what is configured in the "wireless" section above.
pineap_interface - the adapter's monitor interface.

pineapd

Edit the "/etc/init.d/pineapd" init script. You should change all occurrences of "wlan1" to "wlan3" and "wlan1mon" to "wlan3mon":

uci set pineap.@config[0].pineap_interface='wlan3mon'
ifconfig wlan3mon &>/dev/null || airmon-ng start wlan3 &>/dev/null
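Since "wlan1" is a substring of "wlan1mon", a single substitution rewrites both names at once. Here is the idea demonstrated on a sample line; on the device, the same expression can be applied in place with "sed -i 's/wlan1/wlan3/g' /etc/init.d/pineapd" once the script is backed up:

```shell
# One substitution covers wlan1 and wlan1mon alike.
echo "ifconfig wlan1mon &>/dev/null || airmon-ng start wlan1" | sed 's/wlan1/wlan3/g'
# -> ifconfig wlan3mon &>/dev/null || airmon-ng start wlan3
```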

Testing

After rebooting your Pineapple Mark VII, log in to its web interface at http://172.16.42.1:1471

In the Recon section, where scanning takes place, you should see that the Scan now has a new 5GHz option available!

You may also try the following in an SSH terminal (the web terminal provides no output, so SSH into the device). The output should capture access points on channels 36 to 48.

root@mk7:~# airmon-ng    # ensure 5G adapter is on wlan3
root@mk7:~# airmon-ng start wlan3
root@mk7:~# airodump-ng wlan3mon --band a --channel 36-48

Then notice 5.2 GHz on wlan3:
root@mk7:~# iwconfig wlan3mon
wlan3mon  IEEE 802.11  Mode:Monitor  Frequency:5.2 GHz  Tx-Power=20 dBm

Conclusion

The Pineapple Mark VII only supports 2.4GHz out of the box, which is outdated: people are connected to 5GHz WiFi networks. There is very little help on how to set up and configure a 5GHz adapter for the Pineapple. I hope this blog helps others who want to make their Pineapple support 5GHz.

Monday 9 April 2018

Bookman HAALA add-on in Java

In this article I will build the “Bookman” add-on, which saves bookmarks to the database. I will write it in Java and then deploy it onto the open-source Scala platform, HAALA. The platform lives on Bitbucket and the source code for this add-on can be found in the “Download” section.

Users registered with the Platform have add-ons (packs) they can use, and “Bookman” is going to be one of them. I will hook it into the Control Panel for management and make a simple website to view users’ bookmarks.

I will go through the steps to build a complete web application and highlight its vital points. I will only paste bits of the source code, so please download “bookman-java-example-2.54.zip”, unzip it and refer to it for further details. The “Bookman” add-on is a Maven project. I will not go over the project structure and its “pom.xml” as they are self-explanatory.

Content

Domain model
Services
Facades
Controllers
Directives
Feeds
Control Panel and Website
Deploy

Why HAALA?

It is fast and flexible, easy to install on a virtual machine and has the Control Panel I can reuse for add-on management. Although the Platform backend is in Scala with XML-based configuration, I want to see if it can run my annotation-based Java add-on.

To keep it simple, I will reuse some of the Scala classes provided by the Platform, drop validation and hardcode UI text. I will not use advanced features such as a search engine and code just enough to get basic functionality going.

Let’s get started with the database schema and Hibernate entities.

Domain model

This chapter is about the database: SQL schema and Hibernate entities.

SQL schema

I will start by creating the database schema. “schema.sql” can be found in the resources folder.

The naming used in the SQL may be confusing, but it follows a pattern: the uppercased add-on name “BOOKMAN”, followed by an underscore and a table name, foreign key or whatever. “BOOKMAN_bookmarks” tells you that this table belongs to the “BOOKMAN” add-on. As many add-ons could share the same database schema, it is a good idea to prefix them; this also helps to prevent name collisions with the generic tables used by the Platform.

Every add-on should have a table extending the generic “PACKS” table: mappings for users and add-ons. This table has just one column “ID” and its Hibernate entity takes care of sub-classing.

Sample for PostgreSQL database:
CREATE TABLE BOOKMAN_packs
(
  id bigint NOT NULL,
  CONSTRAINT BOOKMAN_packs_pkey PRIMARY KEY (id)
)
WITHOUT OIDS;

There is also the foreign key “BOOKMAN_pack_sub_fk” linking this ID to the ID in the “PACKS” table.

| PACKS |           | BOOKMAN_packs |
|-------|  same as  |---------------|
| ID    |-----------| ID            |
| User  |    FK     |               |

Now, a user can be assigned to the “Bookman” add-on by inserting a mapping into the “PACKS” table. However, to save bookmarks we need yet another table linked to the add-on. Take a look at the “BOOKMAN_bookmarks” table and note the “pack_id” column. Hibernate mappings will let us reach the saved bookmarks collection as follows: User → Packs → look up Bookman Pack → Bookmark entities. And vice versa: Bookmark entity → Bookman Pack → User.

That covers the database schema. The rest of “schema.sql” creates indices and sequences.

Hibernate entities

Although the Platform uses XML for Hibernate descriptors, those should mix nicely with annotations. There are two entities in the “Bookman” add-on:
  • “PackEntity” is the subclass of generic “Pack”: mappings for users and add-ons. It has the static “PACK” field to define the add-on name and the collection of bookmarks.
  • “BookmarkEntity” is a saved bookmark. It has fields mapped to the database and the link to “PackEntity”. “BookmarkDAO” helps to load and save this entity.
Generic Pack entity <--------- Bookman Pack entity
        |             extends           |
        |                               |
        |      maps to DB table         |
    | PACKS |                   | BOOKMAN_packs |
    |-------|      same as      |---------------|
    | ID    |-------------------| ID            |
    | User  |         FK        |               |

Note that Hibernate “nullable” and “unique” constraints are only used when creating a new schema from entities. Since the schema is already created with constraints, there is no need to put those into entities.

I used persistence annotations to configure the entities. Now it is time to write a service to operate on these entities.

Services

This chapter covers a simple service, which manages bookmarks.

“BookmarkService” is the Spring annotated service with auto-wired beans doing CRUD operations on bookmark entities. Here is the stub of the service with no methods implemented:
@Service
@Transactional
public class BookmarkService {

  @Autowired
  BookmarkDao bookmarkDao;

  @Autowired
  ServiceContainer sc;

  public BookmarkEntity create(long userId, String url, String description);

  @Transactional (readOnly = true)
  public BookmarkEntity read(long bookmarkId);

  public BookmarkEntity update(long bookmarkId, String url, String description);

  public void delete(long bookmarkId);

  @Transactional (readOnly = true)
  public List<BookmarkEntity> findByUser(long userId);

  @Transactional (readOnly = true)
  public List<BookmarkEntity> findAll();

}

It does not extend the generic “BaseService” because I want to keep it simple with no search engine nor internal caches. This functionality is included into “BaseService” along with “ServiceContainer” to access generic services such as Labels, Settings etc.

As you can see from the stub, “update” and “delete” methods do not take User ID: a user is already permissioned for these operations. The service is triggered by a facade, which has a validator set. If such a validator sees that the user is not the owner of an entity, it should not allow the request to go through.

The actual service implementation is straightforward. Please refer to “BookmarkService” for details. One thing worth mentioning is “UserMod”: it registers users with the add-on.

The service is ready and it can load and save entities to the database. But to trigger this service I need a facade.

Facades

This chapter covers a Facade for users to trigger services from.

A facade is a class exposed to end users. The Platform exposes facades as JavaScript methods for users to trigger. JavaScript calls the backend via AJAX, and the Platform converts incoming JSON into Scala classes and vice versa.

JavaScript   Facade      Service
    |           |           |
    |   calls   |           |
    |---------->| validates |
    |           |-----+     |
    |           |<----+     |
    |           |   calls   |
    |           |---------->| executes
    |  result   |  result   |-----+     
    |<----------|<----------|<----+     

Here is what a facade stub looks like:
@AopFacade
public class BookmarkFacade {

  @Autowired
  @ValidatorClass
  @Qualifier("baseValidator")
  BaseValidator validator;

  @Autowired
  BookmarkService service;

  @SecuredRole("USER")
  public CallResult create(CreateBookmarkVO cmd, HttpServletRequest request);

  @SecuredRole("USER")
  @ErrorCR(on = ObjectNotFoundException.class, 
                 key = "fcall.nofnd", log = "Bookmark #{bookmarkId} not found")
  public CallResult update(UpdateBookmarkVO cmd, HttpServletRequest request);

  @SecuredRole("USER")
  public CallResult remove(LoadVO cmd, HttpServletRequest request);

  @SecuredRole("USER")
  public CallResult findMy(SearchVO cmd, HttpServletRequest request);

}

The “AopFacade” annotation is required as it applies quite a few aspects to this class. Aspects catch errors, set data into requests, perform security checks etc.

The “Validator” field is a requirement forced by aspects. Here I will use the default validator, which only supports generic validation. This validator does not know whether a calling user owns a bookmark and has permission to delete it; for that you need to implement your own validator, refer to the “Toyshop” add-on for details. Aspects validate non-string arguments, such as “CreateBookmarkVO”, based on field annotations defining restrictions.

The “SecuredRole” annotation should be present on each exposed method, a requirement forced by aspects. You can see the available roles in the “ROLES” table and introduce your own.

The actual facade implementation is straightforward. Please refer to “BookmarkFacade” for details.

XML

The Platform uses DWR to expose facades and the configuration is done in XML. Here is a sample exposing “create” and “read” methods of “BookmarkFacade” as JavaScript methods:
<bean class="org.example.haala.bookman.BookmarkFacade">
  <dwr:remote javascript="BookmanBookmarkF">
    <dwr:include method="create"/>
    <dwr:include method="read"/>
  </dwr:remote>
</bean>

Although the facade itself uses annotations, I needed to write XML to expose its methods. Next I will dive into controllers and URL mappings.

Controllers

This chapter is about controllers and how to map those to URLs.

The Platform invokes services via facades leaving controllers with no special action other than rendering page templates. But nothing prevents calling services from controllers.

The best part is that I do not have to write a controller. I will use the generic “DynamicController” to render a page based on URL mapping configuration.

Here is a sample of a mapping:
{ "pattern": "view/*", "handler": "dynamicCtrl",
  "handlerConf": {
    "pageName": "view",
    "options": "ftl-page",
    "facade": "BookmanBookmarkF",
    "mod": "bookmanBookmarkCtrlMod"
  }}

It maps the “view.ftl” Freemarker template to a URL ending with “view”, for example “http://example.org/view/foobar”. “facade” specifies the facades to include into a page and “mod” is executed by a controller.

Here is a stub of a controller mod:
public class BookmarkCtrlMod extends BaseControllerMod {

  @Autowired
  BookmarkService service;

  @Override
  public scala.Option<String> process(BaseController ctrl, 
            HttpServletRequest request, HttpServletResponse response);
}

The “process” method returns Scala “None” to tell the Platform to use the page template configured in the mapping. And because a Java class cannot directly extend a Scala trait, the adapter “BaseControllerMod” is used.

Please refer to “BookmarkCtrlMod” for details on how to retrieve a bookmark and then refer to FTL files for rendering details.

“BookmarkCtrlMod” is annotation based, but its bean is defined in XML so that the generic controller can find it. Now I will write Freemarker directives to use in page templates.

Directives

This chapter covers Freemarker directives and how to use those in page templates.

Please refer to Freemarker manual to learn about directives. You can write your own directive and register it with the Platform to get access to services and HTTP objects.

Here is a stub for a directive:
public class ListBookmarksDirective extends BaseDirective {

  @Autowired
  BookmarkService service;

  @Override
  public void execute(Environment env, Map params, 
                TemplateModel[] loopVars, TemplateDirectiveBody body);

  @Override
  public void execute(State state);

}

It extends “BaseDirective”, which has methods to render the output. More than that, “BaseDirective” has plenty of Freemarker utilities: output scope, wrapping objects etc. Since the backend is in Scala, there are some Java-to-Scala conversions involved.

The actual directive implementation is very simple: print all bookmarks into a page template. Please refer to “ListBookmarksDirective” for details.

XML

To make this directive available to a page template, I need to update add-on XML.
<entry key="bookman">
  <list>
    <ref bean="bookmanListBookmarksFDir"/>
  </list>
</entry>

Using the directive

Here is a sample of an FTL page using the directive:
<@bookman.listBookmarks ; bookmark >
  <div class="bookmark">
    <p>${bookmark.url}</p>
    <p>${bookmark.description!}</p>
  </div>
</@>

Macros

Directives are very powerful and the Platform supplies quite a few of them. But there is no need to write a directive if a macro can do the job. Please refer to the Freemarker manual to learn about macros.

How do FTL templates compare to JSP pages? FTL files are easier to read, even with embedded macros and directives, than JSP with tag libs and scriptlets. Another important point is that FTL can be updated with no redeployment needed: by feeding files to the Platform.

Feeds

This chapter covers updating resources such as files and settings on the fly through the automatic feed process and explains the “site” concept of the Platform.

The Platform looks for JSON or XML files in a configured directory. It then processes whatever it finds there. Please look into the add-on “resources/feed” directory: settings, URL mappings, FTL templates etc. JSON descriptors such as “x-settings.json” apply the settings, “x-files.json” specifies files to upload etc.

Sites

The Platform uses a “site” to differentiate between website resources. It is a unique combination of lowercased characters followed by a number. This combination dictates which add-on the resources belong to and their language.

The “Bookman” website uses the “boo” string as its site. All of its resources will be saved as “boo1” – the website in English. As I do not care about multilingual support, I am hardcoding English text in the UI instead of saving it to the database. To hook into the Control Panel and use its features, some of the add-on resources need to be saved as “cp1” – the Control Panel in English.

Uploading the feeds will hook the add-on into the Control Panel and create resources for a standalone website. I will explain more about the template structure below.

Control Panel and Website

In this chapter I will hook the add-on into the Control Panel and create a website to view all the bookmarks.

Control Panel

Supplying FTL templates with macros will do the trick. Please refer to “pack-bookman.ftl” for details on how to render a link to the “Bookman” add-on main page. Templates uploaded as “cp1” will be used for the Control Panel in English. And to separate add-on resources from other add-ons, its page templates will be uploaded into “pages/cp/bookman” folder.

FTL structure

In the generic “DOMAINS” table I registered “bookman.example.org” with “style=modern”. The Platform then uses “modern.jsp” to render add-on pages. It requires top and bottom decorator FTL templates, with optional inserts to use custom web resources. Please refer to the FTL files in “resources/feed” folder.

|     main.jsp    |
|-----------------|
| head          <-+--- inserts (FTL or label)
| body            |
|                 |
| [  modern.jsp   |
| |---------------|
| |             <-+--- layout-top.ftl
| | page template |--- FTL or JSP      
| |             <-+--- layout-bottom.ftl
|                 |
|               <-+--- inserts (FTL or label)

Finally, it is time to see the add-on in action.

Deploy

This chapter is about deploying, configuring and running the new add-on.

As the Platform could change in the future and become incompatible with the add-on, I will use its version 2.54, which also has Vagrant support. Following the directions in the “vagrant” folder, I got the virtual machine up and running and can access the “example.org” website.

These are the steps to install the Bookman add-on, which could be applied with small adjustments to any add-on:
  1. Log in to the virtual machine and stop Tomcat
  2. Copy add-on into “/usr/local/haala-project”
  3. Edit “cactus/pom.xml” and include the new module
    <module>../bookman-java-example</module>
    
  4. Edit “cactus/webapp/src/main/webapp/WEB-INF/pack-all.xml” and include the file with XML snippets to process during the build
    <!-- xml::import { file = pack-bookman.xml; node = root } -->
    
  5. Edit “cactus/webapp/pom.xml” and include the resources and add-on itself
    <resource>
      <directory>../../bookman-java-example/src/main/resources</directory>
      <targetPath>../${project.build.finalName}/WEB-INF</targetPath>
      <includes>
        <include>pack-bookman.xml</include>
      </includes>
    </resource>
    
    <dependency>
      <groupId>org.example.haala.bookman</groupId>
      <artifactId>bookman</artifactId>
      <version>${project.version}</version>
    </dependency>
    
  6. Update database with “schema.sql” found in the add-on directory.
  7. Copy “feed” folder from the add-on directory into “/usr/local/haala-cactus.fs/feed”
  8. Start Tomcat, access the Control Panel and run the “AssignMissingPacksTask”
  9. After all the feeds were consumed, try to access: 
    • http://cpanel.example.org:8082/cp/bookman/main.htm
    • http://bookman.example.org:8080
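The file-shuffling steps above can be sketched as a shell session; the service name, database name and exact paths here are assumptions based on the defaults, so adjust them to your setup:

```shell
# Hypothetical session on the Vagrant VM (steps 1, 2, 6 and 7 above).
service tomcat stop
cp -r bookman-java-example /usr/local/haala-project/
psql -d haala -f bookman-java-example/src/main/resources/schema.sql
cp -r bookman-java-example/src/main/resources/feed/* /usr/local/haala-cactus.fs/feed/
service tomcat start
```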
I am now able to manage bookmarks through the Control Panel and see the list of all bookmarks in the standalone website.

Monday 13 February 2017

Microservices with Scala

I will build an application to save a user’s favorite bookmarks. It will use REST microservices and have a simple web interface. This application will give you an idea of how microservices are implemented and how they exchange data between themselves and a user.

Each microservice should run in its own isolated environment and only deal with its business domain. It could be deployed independently of other microservices. However, for simplicity, my microservices will share the same database instance and be part of the same code base. I will maintain the isolation: each microservice will have its own set of database tables, forbidden to others, and no shared tables. The code base will also have a package per microservice.

 

Technical details

  • Authentication is through OAuth2 token
  • Microservices communicate via HTTP with JSON
  • Backend: Scala, Akka HTTP, Akka Actors, MongoDB, SBT 
  • Frontend: AngularJS, Bootstrap, jQuery, Node JS 

 

Source

You can download the application from GitHub and give it a go. This article will not reproduce the source code but may refer to the application source files. So, go ahead, download it, open the project and read on.

The GitHub page has detailed information on what you need in order to run the application. I will not go over it again.

 

Backend

The backend is where the microservices are implemented. Each microservice runs on a dedicated port as an HTTP server. The backend source code shows how to use JSON with microservices, an OAuth2 token and MongoDB. You can also take a look at the Specs for each microservice in the source files. Those are written with a custom REST DSL, which should be concise and readable.

 

Overview

The domain model is rather simple and has three objects: Token, Profile and Bookmark. And there are three microservices dealing with each of the domain objects respectively: Auth, Profiles and Bookmarks.
The above looks like the nanoservice anti-pattern: services so fine-grained that their overhead outweighs their utility. The application is just an example, but consider this: the Profiles microservice could be extended to provide additional information such as users' addresses, roles and bank details, which could be pulled from third parties. As its complexity ramps up, it stops being a nanoservice.

 

Auth microservice

The starting point is the Auth microservice. It maintains users' tokens and credentials. When a user signs in, the Auth microservice provides the user with a unique token, which the user then includes in each request as the Authorization header, e.g. Authorization: Bearer 1234567890. This token tells the microservices who the user is (authentication) and what it can do (authorization).

 

Profiles microservice

Next stop is the Profiles microservice. It holds users' accounts: username, first name, last name and email address. It talks to the Auth microservice and provides the user's account data along with a Profile ID (the user's unique primary key).

 

Bookmarks microservice

Finally there is the Bookmarks microservice. It can create, read, update and delete (CRUD) users' bookmarks. It talks to the Profiles microservice to get a Profile ID. Profile ID is then used to distinguish one user from another when doing CRUD operations on users' bookmarks.

 

Microservices Structure

All my microservices follow a similar structure and workflow:
  1. A user sends an HTTP request to a microservice.
  2. The microservice accepts the request with its REST server part and authenticates the user by a token. It then calls an actor, passing in the request entity, the user's profile, etc.
  3. The actor validates parameters and authorizes the user. It also calls various database methods to form a response entity and replies back to the REST server.
  4. The REST server then sends the response back to the user.

 

OAuth2 token

Here is how a user successfully signs in:

 O
-|-                           [Auth MS]
/ \
 | POST (username, password)      |
 |------------------------------->| Find a user
 | 201 (token)                    |
 |<-------------------------------| Generate a token

A user sends its credentials to the Auth microservice in JSON format: {"username": "test", "password": "test"}. The microservice then looks up the user's details and compares the passwords. If the passwords match, a token is generated and saved for this user. The token is then sent back to the user, and it may look like "AABBCCDDEEFF".

From this point onward, the token should be included in the request headers: "Authorization: Bearer AABBCCDDEEFF". Each request the user makes should have this header included. The other microservices will validate the token by querying the Auth microservice, which tells them who the user is.
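With curl, such an authenticated request could look like the following; the host, port and path are my assumptions for illustration, so check the project sources for the real route:

```shell
# Token obtained from the Auth microservice at sign-in (hypothetical endpoint).
curl -s http://localhost:8082/profiles \
     -H "Authorization: Bearer AABBCCDDEEFF"
```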

 

OAuth2 token in action

Here is another example: a user who has previously signed in looks up its account details in the Profiles microservice:
 O
-|-             [Profiles MS]           [Auth MS]
/ \
 | GET (token)       | GET (token)          |
 |------------------>|--------------------->| Find a user
 |                   | 200 (username)       |
 | 200 (account)     |<---------------------| Provide a username
 |<------------------| Find an account      |
 |                   | by a username        |

A user sends a request to the Profiles microservice. All it provides is the token acquired earlier, in the request header: "Authorization: Bearer AABBCCDDEEFF". The Profiles microservice asks the Auth microservice who the user is, which in turn looks up a username by the token in its database. Now the Profiles microservice knows the username and looks up the account details: {"profile_id": "5", "username": "test", "full_name": "Homer Simpson"}
The solution is far from perfect in that a username connects the token data with the account data. The Auth microservice saves a token with a username and the Profiles microservice saves account data with a username, so additional effort is required when a user wants to change its username. But introducing a foreign key to connect those two pieces of data would just complicate things.

 

OAuth2 token and the Bookmarks microservice

The final diagram I want to show is when a signed in user tries to create a new Bookmark:
 O
-|-            [Bookmarks MS]      [Profiles MS]       [Auth MS]
/ \
 | GET (token, data) | GET (token)       | GET (token)     |
 |------------------>|------------------>|---------------->| Find
 |                   |                   | 200 (username)  |
 |                   | 200 (account)     |<----------------| Provide
 | 201 (bookmark)    |<------------------| Find an account |
 |<------------------| Create a bookmark |                 |
 |                   | with a Profile ID |                 |

A savvy reader may ask why the Bookmarks microservice does not validate a token. It does not query the Auth microservice itself because it relies on the Profiles microservice to do the validation. As the Bookmarks microservice needs a Profile ID to save bookmarks, it simply requests a Profile object from the Profiles microservice, which in turn will validate the token and provide the account data. If the token is invalid or not present, the Auth microservice will return a 401 status code, which will be propagated to the Profiles microservice and then to the Bookmarks microservice, and the user will get a 401 status code in the response.
As a Profile ID is the cornerstone of CRUD operations on users' data and the Profile object is needed by all microservices, I created an authentication chain so that only the Profiles microservice validates tokens. In other words, when the Bookmarks microservice is requested, it will ask the Profiles microservice for a Profile object, which in turn will do the token validation. This saves redundant HTTP calls and the latency involved.
As the above diagram pictures, a user sends a request to the Bookmarks microservice with the token in the request headers. The request entity looks like: {"url": "http://example.org", "rating": "7"}. The Bookmarks microservice queries the Profiles microservice for the user's Profile object: {"profile_id": "5", "username": "test", "full_name": "Homer Simpson"}. The resulting Bookmark object is then saved into the database. As I am using MongoDB, the Bookmark object looks like: {"id": "12", "profile_id": "5", "url": "http://example.org", "rating": "7"}
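As a sketch, creating a bookmark with curl could then look like this; the host, port and path are assumptions for illustration, not taken from the project:

```shell
# Hypothetical Bookmarks endpoint; token from the sign-in step.
curl -s -X POST http://localhost:8083/bookmarks \
     -H "Authorization: Bearer AABBCCDDEEFF" \
     -H "Content-Type: application/json" \
     -d '{"url": "http://example.org", "rating": "7"}'
```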

 

Room for Improvements

I omitted some vital token actions: invalidation (sign out) and refreshing. Also there is no way to add a new user or profile, or to modify an existing one. If you have followed the instructions on the GitHub page, then all the test data should already be in your database.

Furthermore, inter-microservice communication is done with a user's token. There could be cases when microservice A needs to invoke an admin-only URL of another microservice B. One solution could be to introduce a "system token" which both microservices know; when a request comes with this special token, it can be trusted.

The OAuth2 token validation, which is part of the Bookmarks and Profiles REST servers, should be done a bit differently. But in this example it is implemented as simply as possible.

And finally, my custom REST HTTP client, plus the DSL I am using in the Specs, is based on the Apache HTTP client because of its simplicity. I could not make the Akka HTTP client work properly.

 

Frontend

The frontend role is twofold: serve web content to browsers and act as a proxy to access the microservices.
Why a proxy? As a savvy reader, you may recall that microservices run on different ports and could reside on different hosts. But the UI needs to talk to all of them. If you want to mess with CORS headers to bypass the browser's restrictions, as I did at first, then be my guest. Fortunately, the proxy solution is very easy to implement thanks to this article.
My frontend is a proxy weblet:
  • Code which could reside anywhere
  • Scalable horizontally
  • Abstracts microservices' hostnames and ports
  • Easier to write JavaScript to talk to the backend

 

UI Structure

The UI follows a default Angular application structure. I mixed in a Bootstrap theme with a couple of plugins to make it visually appealing.

 

Authentication

The first thing a user should do is authenticate by providing credentials. The service acquires a token and stores it in a cookie. The token is then included in every HTTP request the user makes.

 

Managing Bookmarks

In the frontend, the Bookmark object exists as a JavaScript class to provide auxiliary functions: fetch the user owning the bookmark or refresh itself. The service operating on bookmarks converts JSON to a proper class. The controller, however, has its own JavaScript classes to create a new bookmark and to edit an existing one. The latter wraps a Bookmark class and performs actions on it.
It may sound like over-engineering. For this tiny example it may be true, but when an application grows larger, working with classes instead of plain JSON is a must: auxiliary functions inside a class, encapsulation of properties etc.

 

Conclusion

In this article I tried to explain how to implement microservices and make them work with a web-based UI. The application I built has only rudimentary functions, just enough for a reader to grasp the technologies. I also built my application so it is horizontally scalable, both the frontend and the backend.

Sunday 9 June 2013

Raspberry Pi case: Tin can

Here is the case I made from a coconut milk tin can. The case is big enough to fit a Raspberry Pi with a Pi-Face attached. I wanted to paint it as a Duracell battery but then decided not to use this case, as it may Do a Barrel Roll!




Friday 7 June 2013

Connecting temperature and motion sensors to Pi-Face

I will assemble the same circuit as in my previous post, but this time I will use Pi-Face. The parts are the same except for one minor change: I will use 10k Ohm pull-up resistors for both of the sensors. So grab an additional 10k Ohm resistor and let's wire things up.

Because Pi-Face cannot read the DS18B20 temperature sensor's input, I will connect the sensor's data terminal directly to a Raspberry Pi GPIO pin, while having everything else connected to Pi-Face. Pi-Face connects to Raspberry Pi through SPI, which leaves me with plenty of free GPIO pins. Here is the link to the Pi-Face design and its breakout board with the connection pins marked green.

Here is the circuit

 


The big red square is the Raspberry Pi (Model B) pin header; the pins differ on Model A. Two smaller red squares are the sensors: temperature and motion. Four tiny red squares are resistors, plus two diodes (LEDs). Green lines are wires and green dots are wire junctions. There are also 8 + 8 pins on Pi-Face, marked as Input and Output; the Input set has one ground pin and the Output set has one 5V pin. And finally, the green squares on the Raspberry Pi pins mark where it connects to Pi-Face.

Connecting Pi-Face to Raspberry Pi


I used a bunch of jumper wires to connect only the required pins, marked as green squares on the circuit. Every male pin on Raspberry Pi marked with a green square should be connected to its female slot on Pi-Face. Triple check that this connection is proper: not upside down, not mirrored. Then you may plug in the power supply and run this Python program, which will enable the Pi-Face output pins so you can see its on-board LEDs flash.
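If you prefer to type something in yourself first, here is a minimal sketch of such a test, assuming the pifacedigitalio Python library is installed on the Pi. This is not the linked program, just an illustration; the pattern helper is pure Python.

```python
def chase_pattern(steps=8):
    """Byte values that light one of the 8 output LEDs at a time."""
    return [1 << (i % 8) for i in range(steps)]

if __name__ == "__main__":
    try:
        import pifacedigitalio  # hardware-only dependency
    except ImportError:
        print("pifacedigitalio not available; run this on the Pi")
    else:
        import time
        pfd = pifacedigitalio.PiFaceDigital()
        for value in chase_pattern(16):
            pfd.output_port.value = value  # drives the on-board LEDs
            time.sleep(0.2)
```

If the LEDs chase left to right, the SPI connection between the boards is good.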

Wiring sensors and LEDs


Please read my previous post to understand how to calculate resistor values and wire the sensors into the circuit. Note that I am still using 100 Ohm resistors for the LEDs because the current does not exceed 20 mAmps (measured 18 mAmps), and I've got plenty of those cheap LEDs! Also note that the voltage provided through the Pi-Face Output pins (which act as ground) is not 5V but 4.2V (measured); I guess this is because every output pin has an on-board LED which provides additional resistance.

Summary:
  • Use 5V pin of Pi-Face to power both sensors and both LEDs.
  • Connect the temperature sensor's data terminal to Raspberry Pi GPIO4.
  • Connect the motion sensor's data terminal to Pi-Face input pin 0 (the first pin, as input pin numbering starts from 0).
  • Connect the ground terminal of the first LED to Pi-Face output pin 2 (the 4th physical pin: the first pin is 5V, and output pin numbering also starts from 0) and the ground terminal of the second LED to the pin right next to it, output pin 3.

Python program


It checks the room temperature every half a second and lights up one LED when the temperature is above 20 C (just breathe on the sensor), and lights up another LED when it detects motion. You may want to alter the temperature limit and change the device directory (mine is 28-0000047b16ad) in the source code. Download link.
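For a feel of what the program does, here is a sketch of its core, assuming the usual /sys/bus/w1 interface for the DS18B20 and the pifacedigitalio library. The parsing helper is pure Python; the input polarity may need flipping depending on your wiring.

```python
def parse_celsius(w1_slave_text):
    """Extract degrees C from the two-line w1_slave file contents."""
    lines = w1_slave_text.strip().splitlines()
    if not lines or not lines[0].endswith("YES"):
        return None  # CRC check failed, no valid reading yet
    _, _, raw = lines[-1].partition("t=")
    return int(raw) / 1000.0

if __name__ == "__main__":
    try:
        import pifacedigitalio  # hardware-only dependency
    except ImportError:
        print("pifacedigitalio not available; run this on the Pi")
    else:
        import time
        pfd = pifacedigitalio.PiFaceDigital()
        device = "/sys/bus/w1/devices/28-0000047b16ad/w1_slave"
        while True:
            with open(device) as f:
                temp = parse_celsius(f.read())
            # LED on output pin 2: temperature above the limit.
            pfd.output_pins[2].value = 1 if temp is not None and temp > 20.0 else 0
            # LED on output pin 3: mirror the motion sensor on input 0.
            pfd.output_pins[3].value = pfd.input_pins[0].value
            time.sleep(0.5)
```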

And finally, a set of pretty pictures



Saturday 1 June 2013

Connecting temperature and motion sensors to Raspberry Pi

I will connect a temperature sensor and a motion sensor directly to Raspberry Pi (Model B); both sensors are digital and one-wire. I will also use two LEDs as signals: when the temperature rises or motion is detected, they will flash.

Below are the links to the actual parts I used. Eventually these links may stop working, but all the parts are very common and easily obtainable.
  1. Yellow LED
  2. Red LED
  3. Breadboard
  4. Jumper wires M/M
  5. Jumper wires M/F 
  6. Two 100 Ohm resistors
  7. One 4.7k Ohm resistor
  8. One 10k Ohm resistor
  9. DS18B20 digital temperature sensor
  10. SparkFun SE-10 PIR motion sensor

Here is the circuit



The big red square is the Raspberry Pi (Model B) pin header; the pins differ on Model A. Two smaller red squares are the sensors: temperature and motion. Four tiny red squares are resistors, plus two diodes (LEDs). Green lines are wires and green dots are wire junctions. Wire properly and it should work.

Calculating LED resistor: why 100 Ohm?


Ohm's Law:   I = V / R    →   R = V / I
  • I is the current measured in amperes, V is the voltage measured in volts, R is the resistance measured in ohms.
The board supplies 3.3V and an LED's maximum current is 20 mAmps (according to the manufacturer), so a safe operating current for the LED is around 12 mAmps.

R = 3.3 / 0.012 = 275 Ohm

Oh, and the LED itself acts as a resistor of approximately 175 Ohm (measured manually), thus:
R = 275 - 175 = 100 Ohm

So, if we put a 100 Ohm resistor in a circuit with one LED, the current in the entire circuit will be 12 mAmps.
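The same arithmetic as a tiny sketch:

```python
def series_resistor(supply_v, target_a, led_ohms):
    """Resistor needed so the whole loop draws target_a amps."""
    total = supply_v / target_a  # R = V / I for the entire circuit
    return total - led_ohms      # subtract the LED's own resistance

print(round(series_resistor(3.3, 0.012, 175), 1))  # → 100.0
```

Plug in your own supply voltage and measured LED resistance to pick a resistor for a different board or LED.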

Pull-up resistor


A pull-up resistor connects the sensor's power terminal to its data terminal. Read more about pull-up resistors here. One sensor in the circuit uses a 4.7k Ohm pull-up and the other uses a 10k Ohm one.

Wiring sensors and LEDs


Read the manufacturer's data sheet to understand which of the sensor's terminals connects to power, data and ground. E.g. on my motion sensor the white wire was ground and the black wire was alarm (data).

To wire and enable the DS18B20 temperature sensor please use this guide. Follow the guide and the sensor will be connected to the GPIO4 pin exactly as on my circuit. Here is the data sheet.

The motion sensor connects in a similar manner to GPIO14 and uses 5V. It sends data directly to the GPIO14 pin. Here is the data sheet.
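A tiny sketch of polling it, assuming the RPi.GPIO library. The SE-10's alarm line is open-collector: the pull-up keeps it high at rest and motion pulls it low, so a low level means motion, which the pure helper below captures.

```python
def motion_detected(pin_level):
    """SE-10 alarm line: 0 (pulled low) means motion, 1 means idle."""
    return pin_level == 0

if __name__ == "__main__":
    try:
        import RPi.GPIO as GPIO  # hardware-only dependency
    except ImportError:
        print("RPi.GPIO not available; run this on the Pi")
    else:
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(14, GPIO.IN)
        print("motion!" if motion_detected(GPIO.input(14)) else "all quiet")
```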

Wiring an LED is pretty simple, but you may use this guide; it has pictures!

Python program


It checks the room temperature every half a second and lights up one LED when the temperature is above 20 C (just breathe on the sensor), and lights up another LED when it detects motion. You may want to alter the temperature limit and change the device directory (mine is 28-0000047b16ad) in the source code. Download link.

And finally, a pretty picture

 


In my next post I will assemble the same circuit using Pi-Face.