Enterprise In The Cloud

The Enterprise and the Cloud

Today more than ever, large enterprises are adopting information technology services in the cloud. The cloud provides operational and financial efficiencies that are simply too compelling to ignore, and it allows IT departments to focus on their core competencies.

So the question is: why has it taken large enterprises so long to adopt cloud environments? The first thing that usually comes to mind is security. Nowadays, technology, when implemented correctly, provides the right security measures (e.g., SSL, VPN) to establish connections that are on the same level of security as any behind-the-firewall environment.

Wait out the storm

Conservative CIOs are simply waiting out the storm to allow others to make mistakes before they do. Integration plays a big role as well when it comes to linking cloud applications to other apps both in front of and behind the enterprise firewall. What seems to be the issue then?

Simply put, CIOs are terrified of the idea that a cloud computing company holds their company’s entire data off-site, where the unimaginable could happen… it could simply vanish tomorrow. Service interruptions, data mismanagement, and bankruptcy are some of the common concerns that come to mind.

Some Numbers

One survey found that 65% of enterprise clients are using, or are planning to deploy using, the private cloud model. Despite the efforts of cloud computing vendors, and increased adoption in recent years, security is cited as a significant inhibitor by 70% of respondents, with privacy at 69%, uptime at 62% and data control at 61%. These remain the most likely inhibitors to adopting cloud computing services and applications.

Enter FatFractal:

More choices
Our solutions include public cloud, enterprise on premises, OEM integration, white label and private label.
Deploy wherever you want - from our public cloud to your public or private cloud. Switch any time you want to.
Create cooler applications faster

Our client SDKs provide lightweight, non-proprietary methods that accelerate the creation of mobile and web apps using your APIs, so your development efforts can focus on creating great user experiences.

Speaks your language

Our polyglot PaaS capability supports multiple programming languages, so you can run both new and legacy applications.

Public cloud

The easiest way to get your ideas to market in record time for a lot less money.

Private cloud

Take full control of your environment whenever you want. Run it wherever you want.

Pick the bits you want

With a wide range of technology distributions, you are free to choose the features that you want to complement your offerings and deploy them wherever you want from public cloud, private cloud or on traditional virtualized environments.


Leverage corporate data

Seamlessly integrate corporate data and application data with our Datagraph and Virtual Collections.

Integration support

We help accelerate time to market by providing support for integration with your offerings as well as providing training for your technical, marketing and sales staff.

Get the most out of your mobility infrastructure

The solutions that you create with FatFractal all play nice with your investments in mobility infrastructure like MDM, API gateways and analytics.

Easy integration, high differentiation

Add differentiation to your current mobility technology offerings with line of business facing solutions that can be easily integrated with yours. And we are here to help accelerate time to market by providing integration, training and sales support.

Deploy wherever you want to

Our Cloud-in-a-Box distributions are enterprise ready and can be deployed on public cloud, private cloud or behind the firewall on traditional virtualized environments.

Create beautiful APIs instantly

Our NoServer Datagraph with Virtual Collections automatically generates secure, scalable, full-featured REST APIs and can integrate corporate data and application data seamlessly.
Virtual Collections

Virtual Collections seamlessly integrate application data with corporate data and deliver the combined view to the device.
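To make the idea of generated REST APIs concrete, here is a tiny sketch of the kind of per-object-type URLs such a backend exposes. The base URL and collection name are illustrative assumptions, not FatFractal's documented endpoints.

```javascript
// Hypothetical helper showing the shape of generated per-collection REST URLs.
// The base URL and collection names are assumptions for illustration only.
function resourceUrl(baseUrl, collection, guid) {
  // Without a guid the URL addresses the whole collection; with one, a single object.
  return guid ? `${baseUrl}/${collection}/${guid}` : `${baseUrl}/${collection}`;
}

console.log(resourceUrl("https://example.com/api", "Customer"));
console.log(resourceUrl("https://example.com/api", "Customer", "abc123"));
```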

A guide for migrating from StackMob to another provider

There have been a number of developments in the Backend as a Service space over the last 12 months, including the acquisition of Parse by Facebook and of StackMob by PayPal with the subsequent announcement of the shuttering of their service on May 11.

If you have an application deployed or in development on StackMob and are looking for a provider replacement, there are a few things that you should consider:

  1. Client-side libraries (SDKs).
  2. Changes to your client code that are required.
  3. Custom code.
  4. Webapp assets.
  5. Migration of your data.

Open Source BaaS Test Suite

If you want a good way to compare providers based on a real test suite, Cory Wyles has written an excellent open source test suite that you can use to compare different BaaS providers’ capabilities – on GitHub here. Or, you can read the report summary here.

Client-side libraries (SDKs)

You will obviously need to replace your client-side libraries with the ones provided by your new provider. A few factors directly relate to the portability of your app:

  1. Support for native classes versus proprietary object classes or protocols.
  2. SDK size and external dependencies.

Native versus proprietary models

Depending upon which devices your current project is targeting, the StackMob implementation will use a variety of object model approaches (iOS – NSDictionary or NSManagedObject, Android – StackMobModel, HTML5/JS – StackMob.Model, which is built upon Backbone.js’s Model) which you will need to replace with the ones specified by your new provider. Most providers require the use of their proprietary object classes or a proprietary protocol. Support for Core Data varies from none to good.

FatFractal is the only provider that supports pure, native classes on the client side, consistently across all devices. Migrating to FatFractal involves removing these proprietary artifacts and replacing them with device-native models (iOS – NSObject, NSManagedObject or NSDictionary; Android – Java Object; HTML5/JS – JavaScript Object). This will actually simplify and reduce the size of your application as well.

SDK Size and external dependencies

You will also notice that the SDKs from the various providers range widely in size. The StackMob iOS SDK, for example, is 4.5MB with 5 external dependencies, and others are as large as 23MB with as many as 11 external dependencies. StackMob’s JS SDK requires Backbone.js as a dependency.

FatFractal’s SDKs provide the same or better functionality and are the smallest by far (the iOS SDK, for example, is less than 3MB), with no external dependencies for iOS or JS and a single dependency on the Jackson libraries for JSON serialization/deserialization on Android. Of course, our JS SDK works perfectly with Backbone.js as well as Angular.js or any other framework you prefer.

Client code changes

The other changes to your client code involve how you initialize the SDK for your client, as well as the specific method signatures for interacting with your data. These changes are fairly minor, with one caveat – queries. FatFractal provides far more powerful query capabilities, which can reduce the number of round-trips required to get to the data that your client needs.
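As a sketch of why richer queries save round-trips: if the backend can evaluate a predicate server-side, the client issues one request instead of fetching a collection and filtering locally. The "(field op value)" URL grammar below is modeled loosely on FatFractal's query syntax and should be treated as an assumption, not the exact grammar.

```javascript
// Build a query URL that pushes the predicate to the server.
// The "(field op value)" grammar here is an illustrative assumption.
function queryUrl(collection, field, op, value) {
  return `/${collection}/(${field} ${op} ${value})`;
}

// One round-trip instead of "fetch all, then filter on the client":
const url = queryUrl("Person", "age", "gt", 21); // "/Person/(age gt 21)"
```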

Custom code

StackMob’s custom code provides custom API endpoints for special functions that you require, including making external API calls via HTTP requests, as well as control over how you set the response. Your custom code must be written in Java, Scala or Clojure. Most providers offer similar capabilities, albeit most use JavaScript rather than Java, but you should check to make sure that the functionality you need is available from the provider.

A key capability you will likely need is access to an HTTP client in order to access external API data.

In addition to custom API endpoints (which we call server extensions, read about them here), access to an HTTP client and email support, FatFractal also provides event-based custom code (we call this an Event Handler, read more about them here), which allows business logic to be included in your application backend. Kinvey now provides a similar capability.
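To illustrate what event-based backend logic buys you, here is a minimal sketch of a create-time handler that validates and enriches an object before it is stored. The handler shape and return convention are hypothetical, not FatFractal's actual Event Handler API.

```javascript
// Hypothetical create-event handler: reject invalid objects, enrich valid ones.
function onPersonCreate(person) {
  if (!person.userName) {
    return { ok: false, error: "userName is required" }; // veto the write
  }
  person.createdAt = Date.now(); // stamp server-side metadata before the write
  return { ok: true, object: person };
}
```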

Web app assets and CORS support

If you need to serve up a web version of your application, or just an administrative page for your app, then you will want to make sure that your provider supports web asset hosting as StackMob does. Some providers, most notably Kinvey, do not offer this functionality!

FatFractal has, from the beginning, provided full, easy support for serving up your web assets, including CORS support as well as URL mapping.

Data Migration

There are two ways to deal with migrating your data from StackMob to another provider.

  1. Export from StackMob and import to your new provider.
  2. Access the StackMob API directly and move all data via the current API.

There is no single right answer here, and data migration is always a bit tricky. You should make sure that you can test the reliability of data migration from your StackMob backend to a new provider, so that you get the results that you want.
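A sketch of approach 2: pull records through the old API and push them to the new one, dropping provider-specific metadata on the way. The StackMob metadata field names below (sm_owner, lastmoddate, createddate) are assumptions about the record shape; check them against your own exported data.

```javascript
// Drop provider-specific metadata before writing records to the new backend.
// The field names stripped here are assumptions about StackMob's record shape.
function stripProviderMetadata(record) {
  const { sm_owner, lastmoddate, createddate, ...clean } = record;
  return clean;
}

// Migrate a page of records: clean each one, then hand it to the new provider.
function migrateBatch(records, postToNewProvider) {
  return records.map((r) => postToNewProvider(stripProviderMetadata(r)));
}
```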

FatFractal is happy to help you with either approach – no charge – to make sure that your data can be made available from your new backend easily. Just contact us at Help with StackMob Migration.

A final note – and this pertains to how much control you have over your environment. This post has primarily talked about running your backend on various providers’ public cloud offerings. FatFractal also offers you the ability to run the entire cloud fabric yourself, wherever you want – Amazon, Rackspace and even your own servers. We call that Cloud-in-a-Box and you can learn more about it here.

We know that you have many options for where to run your apps. We at FatFractal think we offer all the relevant functionality StackMob does – from Core Data support to custom code – plus a number of additional functionality and performance benefits. On top of that, our developer support always gets rave reviews. So, we’d be delighted if you came to us!

CIB Lite

This blog provides a brief overview of the FatFractal Cloud-In-A-Box Lite (CIB Lite) evaluation, why you would want to kick the tires on it, some instructions on getting started with it, and where we are going with it.

What the heck is this thing?

CIB Lite is a slimmed-down snapshot of the FatFractal Cloud Platform (FCP) that has been packaged as an Open Virtualization Format (OVF) image. The image includes a configured operating system (Ubuntu 12.04 LTS), Linux Containers (LXC), the FCP, and all the necessary services (e.g., ElasticSearch). The OVF image was generated using VMware Workstation 10.0.1 and has been packaged as a turnkey environment. We wanted to make the evaluation process as simple as importing the OVF image and powering on the instance – and it doesn’t get any easier than that. Another requirement we had was that the evaluator be able to exercise the process from their desktop using a virtualization client. There are several free and paid virtualization clients available for all the common environments (Mac, Windows, Linux, etc.); however, we recommend VMware Fusion, Player, or Workstation as we have tested with them.

From a high level the CIB Lite looks like this -

The FCP has been slimmed down to include just the components needed to exercise the following functionality –

  • Register users
  • Create domain and contexts
  • Deploy fully functional NoServer, Ruby, or Servlet applications
  • View or fetch logs
  • Upload blobs
  • Run the FCP in a fully functional and secure environment

The FCP Dashboard will be included at a later date, which will allow the administrator to manage the environment (e.g., LXC, services, FCP components) from the browser.

Why the heck would you want to kick the tires on it?

The CIB Lite was purposely designed to deliver limited functionality in the easiest manner possible, giving the evaluator a quick and painless way to kick the tires and decide whether they want to evaluate the CIB Enterprise, which is more complicated and has dependencies on infrastructure services.

Why should the evaluator have to provision one or more machines (virtual or native), install something like OpenStack, and configure it all just to exercise some of the FCP’s BaaS and PaaS functionality? We figure that if the experience with the CIB Lite is a pleasant one and the basic functionality is what the evaluator is looking for, then they will be more inclined to make the investment necessary to evaluate the CIB Enterprise.

How the heck do I get started with it?

1. Download the CIB Lite from here.

2. Extract ovf.tar.gz (e.g., tar -xzf ovf.tar.gz).

3. chmod -R 744 ovf (not necessary on Windows)

4. Import the ovf into your virtualization client.

5. Power on your VM (created by the import).

Once your VM powers up you will see the CIB Lite splash (next to the Ubuntu login), which points you to the instructions (accessible via the browser) and shows your CIB Lite IP address.

6. On your desktop, edit your /etc/hosts (on Windows, C:\Windows\System32\drivers\etc\hosts) and add the following entries.

your_cib_lite_ip        acme.ffcib.com

your_cib_lite_ip        system.ffcib.com

We are going to rely on host name resolution via the hosts file.

7. Let’s test the CIB Lite with an existing application.

Point the browser on your desktop to http://acme.ffcib.com/hoodyoodoo/index.html.

If the hoodyoodoo application appears in your browser things are going well.

8. Let’s deploy an application.

8.1 If you don’t have the FatFractal development runtime, please download it from here and add the FatFractal_Runtime/bin directory to your path. To test your runtime installation, enter ‘ffef’ from the shell command line; you should see the ‘ffef’ options. You will need to create the application that you want to deploy; see the getting started docs for NoServer, Ruby or Java/Servlet apps.

If you do have an existing FatFractal development runtime, rename or remove the .pem file that is located in the conf/ directory.

8.2 In the shell that you will be using to deploy your application add the following variable to your environment.

export FF_FABRIC_DOMAIN=ffcib.com

or on windows

set FF_FABRIC_DOMAIN=ffcib.com

8.3 Register an account and create the application domain and context.

Point your browser to http://system.ffcib.com/console/application.html and register. Once you have registered you will be put into a workflow that will allow you to create a domain and context.

8.4 Add the entry below into your hosts file.

your_cib_lite_ip    your_application_domain.ffcib.com

8.5 Enter the command below to deploy your application to the CIB Lite.

ffef deployFFFabric

NOTE: You need to be in the directory where you scaffolded your application.

8.6 Test your application deployment.

Point your desktop browser to http://your_application_domain.ffcib.com/your_application_context/index.html

You should see your application appear in the browser.

Where the heck are we going with this thing?

The FatFractal CIB specifically targets enterprise private or hybrid cloud charters that need BaaS and PaaS functionality. The FatFractal CIB is a battle-tested solution, proven in our public offering and loved by developers, that gives the enterprise an off-the-shelf alternative so they don’t have to build the functionality themselves. FatFractal is constantly adding BaaS and PaaS features to its public offering and rolling them back into the FatFractal CIB. Enterprises can only benefit from the existing foundation of the public offering, the constant improvements, and the ability to extend the platform for their own special needs.

From a high level the CIB Enterprise looks like this -

Have fun and let us know what you think of the CIB Lite!

- The FatFractal Team -



FYI – user management – Part I

We get asked a lot about how to customize the user definition (FFUser) with FatFractal. This post describes two methods to add custom information about the user to the system. Either is fine, but there are some important distinctions regarding access control that you may want to consider. I will be adding more user management use cases in future blog posts. For now, let’s start with the basics: subclassing FFUser, and, as an alternative approach, creating a reference object that carries the additional information.

As usual, I have included source code and a sample application (actually three of them) to further illustrate. For this post, the code is written as test cases and includes iOS, Android and HTML5/JS versions.

The source code for the sample applications is here.

A working sample test app is here.

Update: Feb 14
Introduced FFUserProtocol – you no longer need to subclass FFUser if you introduce a custom ‘user’ class; simply have it implement FFUserProtocol. FFUserProtocol is defined in FFUser.h as follows:

@protocol FFUserProtocol
@property (strong, nonatomic) NSString          *guid;
@property (strong, nonatomic) NSString          *userName;
@end

All methods which previously took FFUser as a parameter will now accept any class that implements FFUserProtocol.

Method 1: MyFFUser as a subclass of FFUser

The first method is to subclass the FFUser class that is included in all the SDKs. The example here adds three parameters to the definition – a nickname (String), location (FFGeoLocation) and profilePic (byte[]). The FFUser class can easily be extended to include whatever you want; see the examples below:
[tabs_framed] [tab title="Android"]

public class MyFFUser extends FFUser {
    private String m_nickname;
    private FFGeoLocation m_home;
    private byte[] m_profilePic;

    public String getNickname() { return m_nickname; }
    public FFGeoLocation getHome() { return m_home; }
    public byte[] getProfilePic() { return m_profilePic; }

    public void setNickname(String nickname) { m_nickname = nickname; }
    public void setHome(FFGeoLocation home) { m_home = home; }
    public void setProfilePic(byte[] profilePic) { m_profilePic = profilePic; }
}

// Then just make sure to let the SDK know you want to use this class instead of FFUser
FFObjectMapper.registerClassNameForClazz(MyFFUser.class.getName(), "FFUser");

You can see the full source for subclassing FFUser for Android on GitHub here
[/tab] [tab title="iOS"]
@interface MyFFUser : FFUser

@property (strong, nonatomic) NSData *profilePic;
@property (strong, nonatomic) FFGeoLocation *home;
@property (strong, nonatomic) NSString *nickname;

@end

// Then just make sure to let the SDK know you want to use this class instead of FFUser
[ff registerClass:[MyFFUser class] forClazz:@"FFUser"];

You can see the full source for subclassing FFUser for iOS on GitHub here[/tab] [tab title="HTML5/JS"]
function MyFFUser() {
    this.clazz = "MyFFUser";
    this.userName = null;
    this.firstName = null;
    this.lastName = null;
    this.email = null;
    this.active = null;
    this.profilePic = null;
    this.home = new FFGeoLocation();
}
MyFFUser.prototype = new FFUser();
You can see the full source for subclassing FFUser for HTML5/JS on GitHub here
[/tab] [/tabs_framed]

FFDL definition for FFUser to use MyFFUser additional parameters

CREATE OBJECTTYPE FFUser (userName STRING, firstName STRING, lastName STRING, email STRING, active BOOLEAN, authDomain STRING, scriptAuthService STRING, groups GRABBAG /FFUserGroup, notif_ids GRABBAG /FFNotificationID, profilePic BYTEARRAY, nickname STRING, home GEOLOCATION)
The FFDL definition for FFUser source on GitHub can be found here

Test cases for registering a MyFFUser user

For brevity, I will not include the code for the test cases that will verify that registering a user works properly using the subclass of FFUser. Instead, I will include the links below:
Android test case for registering a MyFFUser user
iOS test case for registering a MyFFUser user
HTML5/JS test case for registering a MyFFUser user

Method 2: PublicProfile class with a REFERENCE to FFUser(MyFFUser)

The second method is to add the additional information to a new objecttype (my example is called PublicProfile) that contains the same additional information, but also includes a REFERENCE to FFUser. This allows you to manage access control for some user information independently of the FFUser, which may be useful in some cases. Note – for this exercise, the FFUser still has the expanded parameters, but the sample code only populates the standard info for a user. The point is that you can easily separate what is private from what is more “public”.
[tabs_framed] [tab title="Android"]

public class PublicProfile {
    private MyFFUser m_user;
    private byte[] m_profilePic;
    private String m_nickname;
    private FFGeoLocation m_home;

    public MyFFUser getUser() { return m_user; }
    public String getNickname() { return m_nickname; }
    public FFGeoLocation getHome() { return m_home; }
    public byte[] getProfilePic() { return m_profilePic; }

    public void setUser(MyFFUser user) { m_user = user; }
    public void setNickname(String nickname) { m_nickname = nickname; }
    public void setHome(FFGeoLocation home) { m_home = home; }
    public void setProfilePic(byte[] profilePic) { m_profilePic = profilePic; }
}

You can see the full source for the PublicProfile class for Android on GitHub PublicProfile class with REFERENCE for Android
[/tab] [tab title="iOS"]
@interface PublicProfile : NSObject

@property (strong, nonatomic) MyFFUser *user;
@property (strong, nonatomic) NSData *profilePic;
@property (strong, nonatomic) NSString *nickname;
@property (strong, nonatomic) FFGeoLocation *home;

@end


You can see the full source for the PublicProfile class for iOS on GitHub PublicProfile class with REFERENCE for iOS
[/tab] [tab title="HTML5/JS"]
function PublicProfile(obj) {
    if (obj) {
        this.user = new MyFFUser(obj.user);
        this.profilePic = obj.profilePic;
        this.nickname = obj.nickname;
        this.home = new FFGeoLocation(obj.home);
    } else {
        this.user = new MyFFUser();
        this.profilePic = null;
        this.nickname = null;
        this.home = new FFGeoLocation();
    }
}
You can see the full source for the PublicProfile class for HTML5/JS on GitHub PublicProfile class with REFERENCE for HTML5/JS
[/tab] [/tabs_framed]

FFDL definition for PublicProfile with REFERENCE to MyFFUser(FFUser)

#Objecttype definition
# Permission setting
#PERMIT read:system.admins write:system.admins ON /FFUser
#PERMIT read:loggedin write:system.admins ON /PublicProfile
You can find the FFDL definition for PublicProfile on GitHub here

Test cases for registering a FFUser(MyFFUser) user and creating a PublicProfile

For brevity, I will not include the code for the test cases that will verify that registering a user and creating a profile object works properly. Instead, I will include the links below:
Android test case for registering a FFUser(MyFFUser) user and creating a PublicProfile as well
iOS test case for registering a FFUser(MyFFUser) user and creating a PublicProfile as well
HTML5/JS test case for registering a FFUser(MyFFUser) user and creating a PublicProfile as well

Hope that you find this useful…


TechEmpower and Servlets

This blog article is Part One of a two-part set of articles that provides an overview of our Servlet module and its performance as compared to a conventional Servlet container, Tomcat. This article focuses on the design differences between the containers and then provides an overview of the TechEmpower test suite, which will be used to compare the performance. The TechEmpower folks have done a bang-up job of benchmarking different frameworks, and in the process have created a standard test suite that is comprehensive and exercises functionality that you would typically find in a production environment. Rather than reinvent the wheel, I have decided to leverage the TechEmpower test suite with a couple of minor changes. Part Two of this set of articles (which will be available in a couple of weeks) will focus on the results and will hopefully :-) provide some insight that explains them.

FatFractal (FF) Application Container aka Engine

Before jumping directly into the FF Servlet module, I thought it appropriate to provide some background on the FF Application Container or, as I commonly refer to it, the engine. The engine is really just an NIO server that has support for pluggable protocol handlers (and modules, which I’ll cover next). The basic operations of the engine are very straightforward, boiling down to reading (and writing) data as quickly as possible and chunking it to the protocol handler. The FF HTTP protocol handler is event-based (think node.js) and will continue consuming data until it detects that it has received a full HTTP request, at which point it publishes it to subscribers.

Once the protocol handler detects that it has received a complete HTTP request, the request is published, and one of the subscribers is a module delegator. The module delegator is responsible for delegating the request to the module that is managing the application. A module is essentially a software stack (e.g., Servlet) that is responsible for executing the application, so there is a complete decoupling of the network I/O and application execution. A given engine can host multiple modules (e.g., NoServer, Ruby, Servlet) and a module can host multiple applications. For PaaS applications (e.g., Ruby, Servlets), an engine will typically host one module which will host one application, and the engine will run within an LXC container for security reasons. Currently, modules run in the same JVM as the engine; however, that may change in the future so that a single NIO engine can publish requests to multiple modules that reside in their own LXC containers on the same VM or on other VMs.

FatFractal (FF) Servlet Module vs Conventional Servlet Container

The FF Servlet module is a very lightweight Servlet container that supports the typical things you would expect, such as JSPs, listeners and filters. It was purposely not designed to be a full-blown Servlet container like Tomcat, and is available for developers that want to use the framework to implement their server-side functionality. The biggest difference between conventional Servlet containers and the FF Servlet module is that the network I/O has been decoupled from the framework. A typical Servlet application can get access to the socket streams through the HttpServletRequest and HttpServletResponse objects. The FF Servlet module provides access to those streams, but the streams are implemented as encapsulations around buffers. This decoupling of I/O from the frameworks is what allows a single engine to support a truly polyglot environment, and will allow FF to extend its language/framework support using a single software stack.
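The "streams as encapsulations around buffers" idea can be sketched as follows: the application reads from something stream-shaped, but the bytes were already pulled off the socket by the engine and handed over as a buffer. This is an illustrative model, not the module's actual implementation.

```javascript
// Toy stream backed by an in-memory buffer rather than a live socket.
class BufferBackedStream {
  constructor(data) {
    this.data = data;
    this.pos = 0;
  }
  // Read up to n bytes, exactly as application code would read from a socket stream.
  read(n) {
    const chunk = this.data.slice(this.pos, this.pos + n);
    this.pos += chunk.length;
    return chunk;
  }
}

const body = new BufferBackedStream("name=ann&age=30");
const firstEight = body.read(8); // "name=ann"
```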


As previously mentioned the TechEmpower folks have constructed a test suite that consists of tests that exercise different aspects of the frameworks. This article will employ the following three tests:

  1. JSON serialization
  2. Database access (single query)
  3. Database access (multiple query)

JSON serialization

In this test, each HTTP response is a JSON serialization of a freshly-instantiated object, resulting in {"message": "Hello, World!"}.
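In miniature, the per-request work of this test is just object creation plus serialization:

```javascript
// Serialize a freshly instantiated object per request, per the test rules.
function jsonTestResponse() {
  const payload = { message: "Hello, World!" };
  return JSON.stringify(payload);
}

jsonTestResponse(); // '{"message":"Hello, World!"}'
```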

Database access (single query)

How many requests can be handled per second if each request is fetching a random record from a data store?
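A sketch of the per-request work, with the data store faked as a simple lookup. TechEmpower's World table uses ids 1..10000, which is the range assumed here.

```javascript
// Pick a random id in [1, n] and fetch that record.
function randomId(n) {
  return Math.floor(Math.random() * n) + 1;
}

function singleQuery(db, n = 10000) {
  return db.find(randomId(n));
}

// Fake datastore standing in for the real database driver:
const db = { find: (id) => ({ id, randomNumber: id * 7 }) };
```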

Database access (multiple query)

The following tests are all run at 256 concurrency and vary the number of database queries per request. The tests are 1, 5, 10, 15, and 20 queries per request.
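The multiple-query variant simply repeats the single random fetch N times within one request. The faked datastore below is for illustration only.

```javascript
// Perform n random-id fetches per request (n = 1, 5, 10, 15, or 20 in the runs above).
function multiQuery(db, n) {
  const rows = [];
  for (let i = 0; i < n; i++) {
    const id = Math.floor(Math.random() * 10000) + 1;
    rows.push(db.find(id));
  }
  return rows;
}

const db = { find: (id) => ({ id }) }; // stand-in for the real database driver
```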

This article will use the same client (weighttp) that TechEmpower used and the same EC2 configuration. TechEmpower typically tests on both EC2 and on dedicated hardware; unfortunately :-( I don’t have the latter and will only perform the tests on EC2.

While this article will only be comparing the FF Servlet module and Tomcat, the results can also be compared to the TechEmpower framework results since the same tests, client, and EC2 configuration are being duplicated.

Okay over and out and see you soon with the results.

Big Data, Hubi, and Beer

Okay, this blog has nothing to do with beer. But hey! I had to get you here somehow.

This blog article focuses on how FatFractal (FF) uses big data and why it is important to the developer. There are lots of ways to collect this kind of data and extrapolate meaning from it. At FF we designed analytics into the platform from day one, so that we could generate, store, and mine data using common big data technologies such as Hadoop, MapReduce, Hive, Pig, Flume, and Cassandra. The data that is ultimately stored comes from conventional sources such as logs and infrastructure services such as CloudWatch, but also from our instrumented application container, which provides a real-time view into what is really happening with the applications. Our goal is to ultimately provide developers with the tools and information they need to effectively manage and monitor their application’s compute usage.

At FatFractal (FF) we use big data primarily for:

  • Billing - FF charges developers for their compute consumption. All usage metrics are ultimately stored in the FF Hadoop cluster, and at the end of each developer’s monthly billing cycle the data is MapReduced into billing records that are stored in Cassandra.
  • Usage Profile - All applications that are deployed to FF have Usage Profiles (UP) constructed for them. The UP represents a set of compute constraints based upon either a subscription (BaaS) or a number of FatFractal Virtual Spaces (FFVS, a custom LXC container) and services (e.g., database) (PaaS). If the application’s compute usage consistently approaches or exceeds the thresholds of the UP, the developer is notified so that they have an opportunity to upgrade their subscription or allocate additional FFVSs.
  • Application Analytics Service - FF provides analytic reporting for all applications, which can be accessed from the FF console. This allows the developer to track their application’s compute usage. Ultimately, the goal of this service is to provide the developer with the information and tools to truly monitor and manage their application’s compute usage.

This blog article will focus primarily on the UP and analytics in the context of provisioning properly for an application. In addition, it will cover scheduled scaling based on a real-world application (Hubi) that is currently deployed on the FF infrastructure.

Application Compute Usage Metrics

This section provides the reader with background information on why and how FF collects application compute usage metrics.

Planning and scaling in multi-tenanted environments is challenging because you don’t know what applications are actually consuming the resources unless you have baked in the necessary instrumentation. When an instance hits say 80% CPU utilization, the simplest thing to do is clone all the applications onto a newly minted instance and then add it to the load balanced mix (which is what the FF traffic directors do). However, if you can identify the pertinent application(s) you simply need to clone that/those application(s) onto existing under utilized instances or spin up a new one (matching the compute needs i.e., EC2 m1.small) and let the FF traffic directors do their job. The type of data you would need to assess each respective application’s compute usage are things like; 1) CPU milliseconds consumed per time, 2) request and response counts/sizes per time, 3) memory consumption (this one is kind of hazy but a relative number can be arrived at) per time, and 4) etc. You then compare the numbers and zero in on which applications are consuming the most compute. It may well be a situation where the instance is oversubscribed and the applications need to be segmented on to different instances. At FF we collect instance level compute usage from the infrastructure services (i.e., CloudWatch) which tells us what is going on with the instance. For application level compute usage we rely on metrics that are generated by the FF application container. FF uses a custom application container (think Google App Engine) to facilitate the deployment and execution of all applications independent of their type (i.e., NoServer, R-o-R, Servlets, etc.). The FF application container has been instrumented to generate compute usage metrics in real time and ultimately propagates them to a Hadoop cluster. 
It should also be mentioned that the application containers reside in customized paravirtualized containers (LXC/FFVS) that are each assigned a slice (i.e., 1 proc) of the instance’s compute resources. The diagram below provides a high-level view of the application container.

Usage Profile (UP)

OK, we now know how application compute usage metrics are generated; next let’s look at how those analytics can be leveraged.

When an application is deployed to the FF infrastructure, nothing is known about its compute usage requirements. The UP dictates the compute thresholds, which may be an indicator (i.e., the developer signs up for a bronze subscription knowing the associated compute quotas closely match the application’s compute usage); however, most green-field applications are undersubscribed, with significant headroom to grow.

Compute provisioning for BaaS and PaaS applications is defined differently. BaaS developers sign up for a specific subscription, which defines the quotas for the UP. PaaS developers explicitly choose how many FFVSs their application should be deployed to and which services it will use.

While the FFVS compute quotas are published, it remains difficult to specify precisely how much compute a PaaS application is going to need, especially if it starts out as a high-volume application (i.e., a migration from another service). If the PaaS application is oversubscribed, auto-scaling will mitigate the situation; however, this use case is not what auto-scaling was designed for and is not optimal from a cost or provisioning perspective.


Most applications deployed to FatFractal are green-field apps that typically have little to no load to start with. With these types of applications there is sufficient lead time during which analytics can be collected and reconciled against the UP. Once compute usage has hit certain thresholds, the developer is notified that they should upgrade their subscription or allocate additional FFVSs.
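That threshold check can be sketched as follows. The metric names and the quota shape here are assumptions for illustration, not FF’s actual billing schema:

```javascript
// Compare collected usage against the UP's quotas and flag metrics
// that have crossed a warning threshold (e.g., 0.8 for 80%).
function checkUsageAgainstUp(usage, up, threshold) {
  // usage and up are both shaped like { cpuMs: ..., requests: ... }
  var warnings = [];
  Object.keys(up).forEach(function(metric) {
    if (usage[metric] / up[metric] >= threshold) {
      warnings.push(metric + " at " +
        Math.round(100 * usage[metric] / up[metric]) + "% of quota");
    }
  });
  return warnings;
}
```

When the returned list is non-empty, the developer gets notified to upgrade the subscription or allocate more FFVSs.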

There is another class of application that is not green-field but rather a migration from another platform (i.e., Google App Engine) that may generate huge amounts of load once the switch is fully flipped. If the developer knows the compute characteristics of the application (i.e., the app requires 2 × 2.6 GHz worth of CPU), then it is relatively straightforward to formulate a reasonable UP; however, this is generally not the case. The challenge in this situation is to define a UP with a sufficient amount of compute up front to accommodate the application’s usage needs without overcharging the developer or impacting the users of the application.

This can be done three ways:

  1. By over-provisioning, collecting application compute usage metrics over some period of time, and later making the deployment adjustments and redefining the UP.
  2. By under-provisioning, collecting the usage analytics over some period of time, relying on auto-scaling to mitigate spikes in load, and later making deployment adjustments and redefining the UP.
  3. By provisioning minimal compute (i.e., a single FFVS), having the developer partially open the spigot, collecting usage analytics over some period of time, and later making the deployment adjustments and redefining the UP based on some multiple of the number of requests for a certain corpus of users over a period of time.
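Option 3’s extrapolation step can be sketched like this. All names, and the simple linear scaling model with a safety factor, are assumptions for illustration:

```javascript
// Observe usage for a partial corpus of users, then extrapolate a UP
// for the full corpus, padded by a safety factor for headroom.
function extrapolateUp(observed, observedUsers, totalUsers, safetyFactor) {
  var perUserCpuMs = observed.cpuMs / observedUsers;
  var perUserRequests = observed.requests / observedUsers;
  return {
    cpuMs: Math.ceil(perUserCpuMs * totalUsers * safetyFactor),
    requests: Math.ceil(perUserRequests * totalUsers * safetyFactor)
  };
}
```

In practice the multiple would be tuned per request type, since (as the Hubi case below shows) not all requests cost the same.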

With all three options it is preferable to work closely with the developer, which FF recommends and generally does. Ultimately, the goal is to provide developers with the information and tools they need to do it themselves.

All three methodologies work, and each is optimal for certain use cases. IMHO option 3 is the preferred approach, but unfortunately very few migration scenarios are actually in a position to take advantage of it. Independent of which methodology is employed, application compute usage metrics are critical to scoping the final UP and making adjustments to it over time.

Next I will cover a real use case where option 3 was employed.

Introducing Hubi

Hubi is a very cool mobile application developed by Megadevs. It is available on both Android and iOS and has 500,000+ users all over the globe. The application was recently named the best movie-streaming Android app by heavy.com.

The Servlet back-end for the application was originally deployed on a hosting provider. FF was approached by one of the Megadevs developers (Dario Marcato … a great guy, BTW) at AppsWorld 2013 to discuss migrating Hubi from the hosting provider to FF; a couple of months later the journey began.

Hubi generates significant request load but is unique in that it spikes every day at about the same time and is CPU-bound depending on the request type. Below is a table that shows the number of requests, users, and CPU seconds (which won’t mean too much yet) per month from 05/12/2013 through 09/17/2013 to give you an idea of its load.


[Table: requests, unique users, and CPU (seconds) per month, 05/12/2013–09/17/2013; the table data did not survive extraction.]
Hubi was originally provisioned onto one FFVS, and for the month of May, when there were a limited number of users, things worked out fine. We profiled the compute usage with the analytics we had collected up to that point and formulated a UP based on the full corpus of users (approximately 500,000). We provisioned for that UP, and for twenty-one (21) hours a day things went smoothly, but between 1pm PST and 3pm PST we would experience load issues where the instance CPU usage would hit 80%+ and result in request timeouts.

We then profiled Hubi on an hourly basis across the month of June with the analytics we had collected and observed a spike that occurred every day between 1pm PST and 3pm PST. At this point we could simply have adjusted the UP and added n-number of additional FFVSs whose compute would be used approximately three (3) hours a day. While that plan was simple, it was not palatable from a cost perspective, since there is an incremental cost associated with each additional FFVS. So we decided to leverage an FF scaling feature where we predictively spin up n-number of FFVSs at a scheduled time and then tear them down once the time allotment has been hit. We then charge for the cumulative hours, which effectively amounted to the addition of one (1) FFVS. Hubi has been running with this UP for approximately 1.5 months, and there have been no load issues.
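The scheduled-scaling arithmetic can be illustrated with a small sketch. This is not the actual FF scheduler; the function name, the one-hour warm-up, and the billing model are assumptions for illustration:

```javascript
// Plan a daily scheduled scale-up ahead of a known recurring spike,
// and compute the incremental FFVS-hours that would be billed.
function scheduledScalingPlan(spikeStartHour, spikeEndHour, extraFfvs) {
  var scaleUpAt = spikeStartHour - 1; // warm up one hour early
  return {
    scaleUpAt: scaleUpAt,
    scaleDownAt: spikeEndHour,
    extraFfvs: extraFfvs,
    billedFfvsHours: (spikeEndHour - scaleUpAt) * extraFfvs
  };
}
```

Charging only for the cumulative spike-window hours, rather than for n always-on FFVSs, is what made the Hubi provisioning palatable from a cost perspective.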

I apologize in advance for the diagram below; I am still ramping up on the nuances of RRDtool (which I do like). The Y-axis is the number of CPU seconds and request counts. The X-axis is the hours on 7/31/2013, in UTC (8 hours ahead of PST). If I were to provide a diagram for every day, it would be a carbon copy of what you see below. You’ll observe that at about 19:00 (1pm PST) things really start to ramp up. The traffic is effectively a combination of two request types, one of which is extremely CPU-intensive. This specific request type ultimately drives the CPU seconds above the request count, which can be attributed to the request type distribution. You’ll notice that in most hours the CPU seconds consumed are well below the request count; that is because very few users are actually invoking the culpable request type.

The spikes in CPU seconds at 7:00 and 9:00 are not normal, and I am still running MapReduce jobs to try to understand that data as of this writing. The bad news is that I am unsure what they represent; the good news is that I would not even be aware of them if we did not have application compute usage metrics.

So at the end of the day we were able to minimize Megadevs’ cost by formulating a UP that represents the load for twenty-one (21) hours, plus the addition of one (1) FFVS to accommodate the load between 1pm PST and 3pm PST.




At FF we knew application compute usage metrics would be necessary for bookkeeping activities such as billing, and we intuitively believed these analytics would be critical to scaling and managing applications. Hubi and a couple of other applications have validated those assumptions; now the challenge is to deliver the information and tools to developers so that they have a highly granular viewport into their applications and can scale and manage them in the most informed manner possible.

Telluride and Application Analytics

Well the Telluride Film Festival is winding down today (Monday 09/02/2013) and now it is time to mine the compute usage analytics that we have collected.


This is the second year that FatFractal has hosted the Telluride back-end, and things went very smoothly, due in large part to planning based on last year’s fuzzy compute usage analytics. Last year we had no idea what to expect in terms of load and sat on the edges of our collective devops seats as we watched the traffic increase each day. The Telluride Film Festival is a five-day event that builds momentum across the week as film enthusiasts arrive at the festival. FatFractal (specifically Dave Wells) worked with Pete Nies to develop clients for iOS, Android, and the browser that provide functionality to truly help the film-goer optimize their Telluride experience.

Some example functionality is:

  • Seat availability.
  • Book signing schedule.
  • Film schedule.
  • Guest directors.

Below are some iOS and Android screenshots:



The Telluride back-end data must be updated periodically across the five days on live production systems, and that load must be factored into the planning. The data (like most data models) resides in multiple collections and consists of both objects (JSON) and blobs that are related in some manner, which is facilitated through really cool FatFractal NoServer features. In addition, the Telluride back-end easily integrated with Salesforce (for seating availability) using the NoServer Server Extensions.


Last year we served the Telluride back-end off a heavily multi-tenanted EC2 m1.xlarge instance, and it did the job. This year we served the Telluride back-end off two heavily multi-tenanted EC2 m1.large instances for redundancy purposes, with traffic load-balanced to the instances by our directors. We have far more apps on the platform now than we did this time last year, so we figured that given the normal loads on the two EC2 m1.large instances and last year’s Telluride loads (wish we had real analytics back then), we should be able to accommodate this year’s load with some headroom (fingers crossed).

Below are some screenshots of the instance loads. The two instances are represented by the green and blue lines. The Telluride film festival started 08/29/2013 and ended 09/02/2013.

It should be noted that at the start of the festival (see the spikes) we uncovered a bug that affected CPU utilization (that ever elusive monster query) that was fixed in about an hour by our resident guru, Gary Casey.




As you can see, the instances easily handled the load, and our assumptions based on last year’s fuzzy compute usage analytics were somewhat validated. Unfortunately, last year we were not collecting application-level metrics; we relied heavily on information from our logs and extrapolated what we could. Given the graphs above, we could have squeezed more out of the instances, but without fine-grained analytics we did not want to take the risk.

Application Analytics

What application compute usage analytics allow us to do is determine what percentage of an instance’s load is being consumed by each application on that instance. So if I wanted to scale an application to another instance, I would know approximately how much compute must be available on that instance, or I could take the simplistic route and spin up the appropriate EC2 instance type.
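Picking the appropriate EC2 instance type for a known compute requirement can be sketched as a simple capacity lookup. The ECU figures below match the 2013-era m1 family; the headroom logic and function names are assumptions for illustration:

```javascript
// Smallest m1-family instance type whose capacity covers an app's
// observed compute requirement plus a headroom fraction.
var INSTANCE_TYPES = [
  { type: "m1.small",  ecu: 1 },
  { type: "m1.medium", ecu: 2 },
  { type: "m1.large",  ecu: 4 },
  { type: "m1.xlarge", ecu: 8 }
];
function pickInstanceType(requiredEcu, headroom) {
  var needed = requiredEcu * (1 + headroom);
  for (var i = 0; i < INSTANCE_TYPES.length; i++) {
    if (INSTANCE_TYPES[i].ecu >= needed) return INSTANCE_TYPES[i].type;
  }
  return null; // requirement exceeds one instance; segment across several
}
```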

In the table below you can see two applications, telluride and an unnamed app we’ll call ‘anon’, that both reside on the same instance. The metrics were collected across the dates 08/29/2013–09/02/2013, so I have their relative compute usage and can determine how much each contributes to the total instance load.

I should note that the telluride aggregates were actually collected across two instances, but the analytic records contain the instance ID so that I can aggregate across one or more instances; the table is just an example of the application analytics that get collected.

Below is a screenshot of the Telluride API calls and response times from 08/29/2013–09/02/2013 across both EC2 m1.large instances. Application analytics provides a fine-grained viewport that can be drilled down on to determine precisely what the compute usage of any application is.


Application analytics is critical to any multi-tenanted BaaS or PaaS environment. It provides the information necessary to accurately profile an application’s compute usage so that it can be properly scaled in a predictive fashion. In addition, application analytics is the tool by which the infrastructure can be utilized in the most efficient manner possible, allowing for optimal multi-tenancy and ultimately a lower cost to the developer and enterprise.

We will be far more informed for next year’s Telluride Film Festival with the application analytics we captured this year and really look forward to next year’s event!



FYI – Having fun with datagraphs

Most of our sample code on GitHub is the result of a developer’s question that we think has general interest and that other developers may find useful. It usually ends up as a small sample application that shows the result in a simple UI.

But sometimes we do things just for fun. For today’s example, I decided to code up the excellent descriptions on the NoServer features page.

You can access the source code for the sample application here.
You can access the sample application – complete with data, images and all here.

It is interesting that, now that it is so easy to create an app with a backend, one can write the code (a couple of hours), wrap it in an app (less than a couple of hours), and then blog about it.

So what this app does:

This application uses a single Objecttype (Person) and a single Collection (Persons) to demonstrate just how powerful a datagraph representation can be, and how super efficient queries can be constructed to “walk” your datagraph to get what you want with a single call in your client.


First, defining the backend in FFDL takes only a couple of lines of code!

CREATE OBJECTTYPE Person (firstName STRING, lastName STRING, gender STRING, mother REFERENCE /Persons, father REFERENCE /Persons, siblings GRABBAG /Persons, picture BYTEARRAY)

The REFERENCE associations above (mother and father) can be considered “one to one” relationships between Person objects.

The GRABBAG association (siblings) is essentially a “one to many” relationship between Person objects.

The system also automatically creates reverse relationships for all of these, called BackReferences, which provide an extremely powerful means of “walking” your datagraph.


The first example shows how to retrieve an object using a REFERENCE. We first get the Person with firstName “Bart” and then get the Person referenced by the object member “father”.

var bart, homer;
ff.getObjFromUri("/Persons/(firstName eq 'Bart')/father", function(response) {
    homer = response;
});
// or, if bart has already been loaded on the client
homer = bart.father;

You can see References working here.

Grab Bags

This next example shows how to retrieve a set of objects from a Grab Bag by reference. We want to get all of Bart’s aunts on his mother’s side. To do this, we first get the Person with firstName “Bart”, then get Marge via the “mother” Reference member, then get her female siblings from the Grab Bag referenced by the “siblings” member.

var aunts;
ff.getArrayFromUri("/Persons/(firstName eq 'Bart')/mother/()/siblings/(gender eq 'Female')", function(response) {
    aunts = response;
});

You can see Grab Bags working here.

Back References

This next example shows how to retrieve objects using Back References. There are three tests here: the first gets all BackReferences to “homer”; the second gets all the Person objects from the Grab Bag that reference homer via the “siblings” Reference member; the third gets all the Person objects that refer to homer via the “father” Reference member.

var allBackRefs;
ff.grabBagGetAll(homer, "BackReferences", function(response) {
    allBackRefs = response;
});

var allSiblingBackRefs;
ff.grabBagGetAll(homer, "BackReferences.Persons.siblings", function(response) {
    allSiblingBackRefs = response;
});

var allFatherBackRefs;
ff.grabBagGetAll(homer, "BackReferences.Persons.father", function(response) {
    allFatherBackRefs = response;
});

You can see BackReferences working here.


This last example shows how to retrieve objects using some more advanced queries. There are five examples in this section. The first gets all the Person objects that have a “father” Reference member (note that the returned values are deduplicated). The second is basically the same, but uses the “mother” Reference member. The third shows the use of a logical OR to get all of Bart’s grandfathers. The fourth is similar, but adds another logical OR to get all of Bart’s grandparents. The last shows a complex query to get all of Ling’s cousins (the team’s favorite query).

var fathers;
ff.getArrayFromUri("/Persons/()/father", function(response) {
    fathers = response;
});

var mothers;
ff.getArrayFromUri("/Persons/()/mother", function(response) {
    mothers = response;
});

var grandfathers;
ff.getArrayFromUri("/Persons/(firstName eq 'Bart')/father or mother/()/father", function(response) {
    grandfathers = response;
});

var grandparents;
ff.getArrayFromUri("/Persons/(firstName eq 'Bart')/father or mother/()/mother or father", function(response) {
    grandparents = response;
});

var cousins;
ff.getArrayFromUri("/Persons/(firstName eq 'Ling')/father or mother/()/siblings/()/BackReferences.Persons.mother or BackReferences.Persons.father", function(response) {
    cousins = response;
});

You can see Queries working here.

Get the Entire Datagraph

Lastly, you can actually retrieve the entire datagraph in a single query by specifying the “depth” of the leaf-level items returned. This test will fetch the entire datagraph.

var datagraph;
ff.getArrayFromUri("/Persons/()?depthRef=3&depthGb=3", function(response) {
    datagraph = response;
});

You can see the entire Datagraph here.

Final note: the source code includes a couple of Server Extensions used to populate/unpopulate the data for this app. You may find them useful samples, although they are rather brute-force and inelegant in their current form.

FYI – FatFractal Makes File Upload Easy

You can access the source code for the sample application here.

Inspired by a recent blog post by Raymond Camden, we decided to show how much easier FatFractal makes it to create an object containing a blob. This example uses our JavaScript SDK in a PhoneGap app, but all of our SDKs feature the same ease of use.

Here’s the relevant code for uploading with FatFractal:

// Not even a little bit complex -- just set the member
var newNote = {
    clazz: "Note",
    text: noteText
};
if (imagedata) newNote.picture = imagedata;

ff.createObjAtUri(newNote, "/Note", function(result) {
    // handle success
}, function(error) {
    console.log("Oh crap", error);
});

Here’s the Parse code (from Raymond Camden’s blog):

// A bit complex - we have to handle an optional pic save
if (imagedata != "") {
    var parseFile = new Parse.File("mypic.jpg", {base64: imagedata});
    parseFile.save().then(function() {
        var note = new NoteOb();
        note.set("picture", parseFile); // attach the saved file to the note
        note.save(null, {
            success: function(ob) {
                // handle success
            },
            error: function(e) {
                console.log("Oh crap", e);
            }
        });
    }, function(error) {
        console.log("Oh crap", error);
    });
} else {
    var note = new NoteOb();
    note.save(null, {
        success: function(ob) {
            // handle success
        },
        error: function(e) {
            console.log("Oh crap", e);
        }
    });
}

(The only other significant change was to switch to reading the image from the filesystem rather than receiving it as a base64-encoded string, which is arguably better practice anyway. Check out the full source on GitHub!)

So, instead of forcing you to create and save a special file object, FatFractal lets you do the natural thing: you set the member, we take care of the rest.

FYI – FatFractal now provides textual term search for NoServer apps!

As a developer, I want to be able to retrieve information from my backend using search technology so that I can create even more powerful queries and get super fast responses.

You can see working code for everything below (here).
You can access the source code for the sample application (here).

We are really excited to announce that our latest release includes full textual term search for all data stored in FatFractal NoServer applications. This opens up an entirely new set of options for your queries, and they are super fast to boot!

How it works:

We have recently started to use elasticsearch (a superb product, by the way; a future post will outline why we switched to it) as one of the data stores for your app’s data. Basically, as before, everything that is stored is fully indexed for you automatically, without any configuration of your application or schema required. In addition, our implementation does not impose any additional overhead on any interaction with your data (Create, Read, Update, or Delete), but it has vastly improved the response times for all queries, and we have added full textual term search as well.

How to use it:

We have added two new operators that you can use in your queries, “contains_all” and “contains_any”. These provide textual term search capability for queries that look for, you guessed it, matches on ANY submitted term or on ALL submitted terms.
For more information, you can find the documentation for the FatFractal Query Language (here).


As usual, we have created a sample application to illustrate this. You can try the application (here), get the source code for the application (here), and also play with the databrowser for the application’s backend (here).

The sample application uses a Movies collection that holds Movie objects. The movie objects have a member called “description” which we will be searching.

The model for Movie is as follows:

function Movie() {
    this.title = null;
    this.description = null;
    this.year = null;
    return this;
}
We have populated the collection with two Movies as shown below:

"title": "The Conjuring",
"description": "Paranormal investigators Ed and Lorraine Warren work to help a family terrorized by a dark presence in their farmhouse. Forced to confront a powerful entity, the Warrens find themselves caught in the most terrifying case of their lives.",
"year": 2013
"title": "Grown Ups 2",
"description": "After moving his family back to his hometown to be with his friends and their kids, Lenny finds out that between old bullies, new bullies, schizo bus drivers, drunk cops on skis, and 400 costumed party crashers sometimes crazy follows you.",
"year": 2013
So, now let’s create a couple of queries that use free text search.

Search using contains_any

First, we will search the Movies collection for any Movie objects whose description member contain any one of two terms (for example, “family” and “terrorized”). The query looks like this:

/Movies/(description contains_any 'family terrorized')
The code looks like this:
function searchAny() {
    var eli = document.getElementById('movies-search-any-input');
    ff.getArrayFromUri("/ff/resources/Movies/(description contains_any '" + eli.value + "')", function(movies) {
        // handle success
    }, function(code, msg) {
        // handle error
    });
}
Since “family” is contained in the descriptions of both “The Conjuring” and “Grown Ups 2”, this query will return both objects.

Search using contains_all

Next, let’s use the same search terms, but use the contains_all operator, which will search for any Movie objects whose description member contain both search terms.

/Movies/(description contains_all 'family terrorized')
The code looks like this:
function searchAll() {
    var eli = document.getElementById('movies-search-all-input');
    ff.getArrayFromUri("/ff/resources/Movies/(description contains_all '" + eli.value + "')", function(movies) {
        // handle success
    }, function(code, msg) {
        // handle error
    });
}
In this case, the query will return only “The Conjuring”, as its description contains both terms and “Grown Ups 2”’s does not.

As mentioned above, these operations are extremely fast and efficient, as any data stored on your backend is fully indexed automatically as soon as it is created or modified.

Have fun!