About Us

Jumpstart your enterprise big data initiative with these ready-to-deploy solutions

Solutions

Apervi Conflux offers business accelerators in the form of ready-to-deploy solutions. These solutions help enterprises jumpstart their big data initiatives and address commonly seen use cases. They include the following:

EDW Offloading on Hadoop

The solution includes data governance and built-in support for incremental updates, retries and late-arriving data

More >>

OLTP Replication on Hadoop

Supports real-time operational reporting, plus offline storage to enable deep analytics

More >>

Real-time Log Analytics

Includes log collection, aggregation and storage in Hadoop, with support for analysis and alerting

More >>
 

EDW Offloading on Hadoop

Enterprise Data Warehouses (EDWs) have been at the core of most companies' data infrastructure for many years. With the explosion of data growth, EDW capacity is being stretched and is becoming very expensive. Infrequently used data takes up too much of this costly capacity. Load-processing windows are being consumed, adversely affecting service levels and threatening the delivery of critical business insights produced by scheduled and overnight batch jobs.


Leveraging Hadoop to offload some of this storage and processing offers significant benefits: much lower storage costs and faster processing times, with anywhere from 5X to 50X savings. With Apervi's Data Warehouse Offloading Solution (ADOS), enterprises can automate these workflows, from ETL and SQL ELT to post-Hadoop transformation back to the EDW or analytics tools, offering a way to jumpstart their EDW modernization. Leveraging Apervi Conflux can save enterprises 4X to 8X in the effort required to create and manage such data workflows.

Our solution provides pre-built workflows out of the box, which users can quickly configure and adapt for their enterprises. The workflows include: core ETL to load the appropriate data, structured or unstructured, into the EDW from various data sources; ELT to bulk-load data onto Hadoop and to move ongoing 'cold' data from the EDW to Hadoop, with support for retries and late-arriving data; and preparation of data on Hadoop for analytics, all without having to write any Hive queries.
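As an illustration of the retry and late-arriving-data handling mentioned above, the sketch below shows one common technique: watermark-based incremental offloading with a lookback window, so records that arrive after their timestamp has already been passed are still picked up on the next run. This is a generic example, not Conflux's actual API; the function name and row shape are assumptions.

```python
from datetime import datetime, timedelta

def incremental_offload(rows, last_watermark, lookback=timedelta(hours=1)):
    """Select rows to offload to Hadoop: anything updated after the last
    watermark, minus a lookback window so late-arriving records whose
    timestamps fall just before the watermark are re-collected.
    Returns (selected_rows, new_watermark)."""
    cutoff = last_watermark - lookback
    selected = [r for r in rows if r["updated_at"] > cutoff]
    # Advance the watermark to the newest timestamp seen this run.
    new_watermark = max((r["updated_at"] for r in rows), default=last_watermark)
    return selected, new_watermark
```

Downstream writes would then be idempotent (e.g., overwrite by key) so that rows re-selected by the lookback window do not create duplicates.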

OLTP Replication on Hadoop

Operational data is critical to running and managing business tasks. Operational workloads typically involve short, fast inserts and simple queries that return only a few records at a time but must complete very quickly. Backup and recovery requirements are stringent, ensuring no data loss.

Existing operational databases, both RDBMS and NoSQL, are straining under the volume and velocity associated with big data. Traditional ETL pipelines create a bottleneck for applications and analysts who rely on timely data. Apervi's Replicated OLTP Solution (AROS) can significantly streamline ETL pipeline setup and maintenance and enable real-time operational reporting. In addition, enterprises can build a maintainable replication mechanism to Hadoop to enable deeper analytics later, and have both analytical and transactional data available on Hadoop, which can scale up and out further than a traditional RDBMS.

Using our solution, users can quickly get started, configuring pipelines that capture OLTP database changes in real time and store them in Hadoop. The captured transactional data in Hadoop is made available to analysts and decision makers almost instantaneously for operational reporting, completely eliminating the 24-48 hour wait for nightly batch jobs to move data to a data warehouse.
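Conceptually, change-data-capture replication of this kind reduces to replaying an ordered stream of insert/update/delete events against a snapshot to keep a queryable current state on the Hadoop side. The sketch below is a minimal illustration of that replay step, assuming a simple event shape (`op`, `key`, `row`); it is not AROS's actual event format.

```python
def apply_cdc_events(snapshot, events):
    """Apply an ordered stream of change-data-capture events to a
    snapshot (dict keyed by primary key), yielding the current state
    used for operational reporting."""
    for ev in events:
        op, key = ev["op"], ev["key"]
        if op in ("insert", "update"):
            # Upsert: inserts and updates both replace the row by key.
            snapshot[key] = ev["row"]
        elif op == "delete":
            # Tolerate deletes for keys never seen (e.g., replayed events).
            snapshot.pop(key, None)
    return snapshot
```

Because the replay is keyed and order-preserving, re-running it over an overlapping event window is safe, which matters when pipelines retry.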


Real-time Log Analytics

System logs, web logs and application logs are very valuable sources of operational intelligence. For instance, clickstream data can provide insights used to market to customers. System logs are valuable for everything from monitoring system performance, correlating issues, lowering IT costs and optimizing infrastructure spend to securing a company's assets.


As the scale and complexity of systems have increased, managing the many logs spread across enterprise infrastructure has become a challenge. Terabytes of mostly unstructured data, in no single common format, must be indexed in real time to be useful. With Apervi's Real-time Log Analytics Solution (ARLAS), enterprises can continuously collect logs from different systems, aggregate them, store them on Hadoop and make them available for analytics. The data can also be used for alerting and indexed for search.

We offer pre-built connectors to messaging systems like Kafka and RabbitMQ and to search engines like Solr and Elasticsearch, along with pre-built parsers for several syslog formats originating from the networking devices (routers, switches, proxies, firewalls, etc.) in an enterprise. Users can jumpstart their log analytics by configuring our solution to collect, aggregate and store log data in real time. They can configure workflows to process incoming unstructured messages, parse them into structured log records and leverage search indexing for real-time log analysis.
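To show the kind of parsing step such a workflow performs, here is a minimal sketch of turning a raw RFC 3164-style syslog line into a structured record ready for indexing. It handles only the classic `<PRI>timestamp host tag: message` shape; ARLAS's actual parsers cover more formats, and the regex and field names here are assumptions for illustration.

```python
import re

# Classic BSD syslog shape: "<PRI>MMM dd HH:MM:SS host tag: message"
SYSLOG_RE = re.compile(
    r"<(?P<pri>\d{1,3})>"
    r"(?P<timestamp>\w{3} +\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) "
    r"(?P<tag>[\w\-/]+):? ?(?P<message>.*)"
)

def parse_syslog(line):
    """Parse one syslog line into a dict of structured fields,
    or return None if the line does not match the expected shape."""
    m = SYSLOG_RE.match(line)
    if not m:
        return None
    fields = m.groupdict()
    # RFC 3164 encodes facility and severity together in the PRI value.
    pri = int(fields.pop("pri"))
    fields["facility"], fields["severity"] = pri // 8, pri % 8
    return fields
```

Records that parse cleanly can be bulk-indexed into a search engine for real-time analysis; lines that return None would typically be routed to a dead-letter store for inspection rather than dropped.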

Clients & Partners