This is an academic idea that may have a use case. It is a different approach to storing and transferring XBRL and developing a fast and flexible API for XBRL tool development. I would appreciate comments on applicability and any possible use cases for the concept.

The Transporter Concept

The Transporter moves XBRL from point A to point B. It does not know or care how the data is stored on either end. For instance, it can transport from a file system (US-GAAP or IFRS) to a database, or from a database back to a file system (instance/extensions). It can move or copy from one database to another regardless of the nature of the database, whether Oracle, MySQL, MongoDB, Hive, SOLR or most anything similar, SQL or NoSQL alike. The Transporter can move or copy from/to databases anywhere on the planet. It is a unique architecture.

Distributed XBRL Databases

A distributed model means that we are going to store XBRL in more than one database. These databases could be internationalized, meaning that they could be located in more than one country. We want to be able to merge or split taxonomies/instances as we choose across multiple database nodes. The computing concept for this is known as grapevining: any node can talk to any other node. We don't care about the medium in which the data is stored. The medium may be relational (SQL Server, Oracle), NoSQL (SOLR, Hive/Hadoop) or some other form of file system storage.


Let’s use an analogy. You go into a restaurant. The server gives you a menu. You choose one or more items from the menu. The server disappears behind some double doors. A while later, the server brings you what you asked for. You don’t know or care where it came from. You have no idea what’s behind those double doors, who prepared the food or how. You simply ask for something and you get it.


That is our technology model. Our client software doesn’t know or care how the data is stored. A client simply asks for something and it gets it. It can merge or split or redistribute XBRL taxonomies/instances at will. It can do this either domestically or across international borders. We accomplished these goals. Our storage mechanism is something we call the Gumby approach. It is flexible enough that we can adapt to most any situation.
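The client-side contract behind this model can be sketched as a minimal interface. This is an illustrative sketch, not the actual product's API; all class and method names here (`XbrlDataSource`, `get_facts`, and so on) are hypothetical:

```python
from abc import ABC, abstractmethod

class XbrlDataSource(ABC):
    """Hypothetical contract: the client asks, the source answers.
    The client never learns whether the backing store is Oracle,
    SOLR, Hive, or a directory of XBRL files."""

    @abstractmethod
    def get_facts(self, concept: str) -> list:
        """Return all fact values reported for a concept name."""

class InMemorySource(XbrlDataSource):
    """Toy backend standing in for any real store."""
    def __init__(self, facts):
        self._facts = facts  # {concept: [values]}

    def get_facts(self, concept):
        return self._facts.get(concept, [])

# The client code is identical for every backend.
source = InMemorySource({"us-gaap:Assets": [1000, 1200]})
print(source.get_facts("us-gaap:Assets"))  # -> [1000, 1200]
```

Swapping the in-memory backend for a relational or big-data one would change only the subclass, never the client.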

USB Plug and Play

Plug and play in this context refers to software modules that can be interchangeably networked together or pulled apart as required, similar to a USB connector. In the old days, if you wanted to connect a hard drive to your machine, you had to pull up the BIOS and indicate the number of cylinders and sectors applicable to the drive. The USB standard provides plug and play. You simply plug the device into the machine. The device says: here I am, and this is what you need to know about me. The user doesn’t know or care about the interaction. All users know is that they plugged in a device and it works.


Imagine applying the same concept to XBRL. This means being able to set up interchangeable connections from one or more XBRL data sources to another. XBRL data, including both taxonomies and facts, could be moved or copied as desired. This is the concept also known as grapevining. Servers connect to each other in a mesh, similar to file-sharing programs.


Using dimensions, cubes and axes could be strategically split across servers. Roletypes could be distributed across servers, perhaps putting all balance sheets on one server and income statements on another.
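One way to picture distributing roletypes across servers is a simple routing table. The server names and roletype URIs below are purely illustrative assumptions, not part of any real deployment:

```python
# Hypothetical routing table: which node stores which roletype.
ROLETYPE_NODES = {
    "http://example.com/role/BalanceSheet": "node-us-east",
    "http://example.com/role/IncomeStatement": "node-eu-west",
}

def node_for(roletype_uri: str, default: str = "node-default") -> str:
    """Pick the server responsible for a given roletype URI."""
    return ROLETYPE_NODES.get(roletype_uri, default)

print(node_for("http://example.com/role/BalanceSheet"))  # -> node-us-east
print(node_for("http://example.com/role/CashFlows"))     # -> node-default
```

A real system would resolve this dynamically through the grapevine rather than a hard-coded dictionary, but the client-facing idea is the same: ask for a roletype, get routed to whichever node holds it.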


In a plug and play environment, client software does not know or care about where data is coming from. The client performs the same regardless of whether data is stored in Oracle, SQL Server, Big Data or XBRL files. We can connect two or more data sources and interchange facts and taxonomies. It does not matter if one source is Oracle and the other Big Data or XBRL files.


A plug and play model does not import/export from/to instance documents and extensions in the traditional sense. It sees all data sources as the same. Producing an instance document from SQL Server, for example, is simply a copy from SQL Server to the XBRL file system; the reverse copies the XBRL file system into SQL Server. The Transporter doesn’t know or care which is which. The same code handles either.


The Transporter program handles it. We do this by setting up connections and transferring from A to B. The Transporter doesn’t know or care where the data is coming from or going to. It simply reads from A and writes to B, as in "beam me up, Scotty."
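The read-from-A, write-to-B idea can be sketched in a few lines. This is a toy illustration under assumed names (`transport`, `ListSource`, `ListSink`), not the actual Transporter code; in practice the two ends would wrap databases or XBRL file systems:

```python
def transport(source, sink):
    """Copy every record from A to B. The transporter neither knows
    nor cares what either end is; it only needs a reader and a writer."""
    for record in source.read():
        sink.write(record)

class ListSource:
    """Stand-in for any readable end (database, file system, ...)."""
    def __init__(self, records):
        self.records = records
    def read(self):
        yield from self.records

class ListSink:
    """Stand-in for any writable end."""
    def __init__(self):
        self.records = []
    def write(self, record):
        self.records.append(record)

a = ListSource([{"concept": "Assets", "value": 1000}])
b = ListSink()
transport(a, b)
print(b.records)  # -> [{'concept': 'Assets', 'value': 1000}]
```

Because `transport` depends only on `read` and `write`, the same function moves data file-to-database, database-to-database, or database-to-file without modification.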

Fitness for Use

Basic storage mechanisms consist of either a relational database (Oracle, SQL Server, Sybase) or big data (Hive, Hadoop) or a file system-based storage mechanism such as XBRL. Each has pros and cons when considering fitness for use. A main consideration is the concept of transactions. Relational databases support transactions. Big data and file-based storage mechanisms do not.

An XBRL filing program handles lots of events under the hood. If the user is building a roletype, for instance, any drag and drop of the mouse may have a dozen events going on in the background, from creating and modifying arcs to setting and changing object relationships.


For this type of XBRL product, we would use a database that supports transactions. Relational databases do this; big data does not. If an error occurs using big data, part of the transaction would succeed and part would fail, and fixing the resulting problems would be difficult.


But big data is very fast and can store unlimited volumes of data. It is optimal for analytic programs. Relational databases are slower and have finite limits on the amount of data they can store in any specific database.


An XBRL file-based filing program stores its data in memory at runtime. If the system fails, you lose your work without corrupting your data source. An inconvenience, but not necessarily a disaster.


This means that even though our plug and play can connect to and work with most anything, we must consider whether we want it to, based on applicability.


Plug and Play XBRL can internationalize the user community. Imagine working with US-GAAP documents in the United States and then switching to IFRS in the Netherlands, or to any other taxonomy in any other country, all the while using the same client program the user is familiar with.


Imagine running analytical reports using the same XBRL engine used for filings. Imagine connecting to blockchain from the same application.

The Bow Tie Architecture

I call plug and play the bow tie architecture. On the right side are plug and play data sources. On the left side are XBRL client programs. The knot in the middle is an XBRL engine that drives all of it.



On the left (XBRL client programs):
- XBRL filing products
- XBRL analytic tool products
- XBRL validation products

On the right (plug and play data sources):
- SQL Server
- Big Data
- XBRL files
- JSON standard
- ESMA standard
- Blockchain


The bow tie architecture allows users to mix and match products with data sources seamlessly. Most everything is reusable, and development time and costs are minimized. For instance, if we were to display a roletype to the user, we have the option of using a different skin in different parts of the application, or making it read-only for analytic purposes. By setting property attributes, a single client-based roletype display module can be reused across a variety of products, appearing different to the user each time while being the same under the hood. That roletype display module can then be connected to a variety of data sources, each for a variety of purposes.
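The property-attribute reuse idea can be sketched as follows. The attribute names `skin` and `read_only` are my own illustrative choices, not the product's actual properties:

```python
class RoletypeDisplay:
    """One display module, reused across products by flipping properties.
    The module is the same under the hood; only its attributes change."""
    def __init__(self, skin="default", read_only=False):
        self.skin = skin
        self.read_only = read_only

    def render(self, roletype_name):
        mode = "view" if self.read_only else "edit"
        return f"[{self.skin}] {roletype_name} ({mode})"

# The filing product gets an editable display; the analytic
# product gets the same module, skinned differently and read-only.
filing = RoletypeDisplay(skin="filing", read_only=False)
analytics = RoletypeDisplay(skin="analytics", read_only=True)
print(filing.render("Balance Sheet"))     # -> [filing] Balance Sheet (edit)
print(analytics.render("Balance Sheet"))  # -> [analytics] Balance Sheet (view)
```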


For example, we have an XBRL validation module in development. It is multi-threaded and very fast. It updates a very nice user interface in real time as information is generated, and it uses the USB plug and play architecture. This means it can interchangeably validate from most any data source, whether XBRL files, a relational database or big data, without knowing or caring which.
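The multi-threaded, source-agnostic shape of such a validator can be sketched with a thread pool. The validation rule here is a deliberately trivial stand-in, and the fact records could just as well have been read from files, Oracle, or a big-data store:

```python
from concurrent.futures import ThreadPoolExecutor

def validate_fact(fact):
    """Toy rule standing in for real validation: a monetary fact
    must carry a numeric value."""
    ok = isinstance(fact.get("value"), (int, float))
    return (fact["concept"], ok)

facts = [  # source-agnostic: a list here, any data source in practice
    {"concept": "Assets", "value": 1000},
    {"concept": "Liabilities", "value": "oops"},
]

# Validate facts concurrently; results stream back as workers finish,
# which is what lets a UI update in real time.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(validate_fact, facts))

print(results)  # -> {'Assets': True, 'Liabilities': False}
```

Because the validator only iterates over fact records, pointing it at a different backend means swapping the iterable, not the validation code.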


The same may also apply to a formula linkbase module, regardless of whether it is used in report creation or report consumption. The formula processor does not need to know or care about where data is coming from. It simply executes formulas and assertions as necessary, thus providing for optimal reusability.