
Wipro Develops Fraud Detection Model

Aims at Improved Risk Management by Using Big Data

Bengaluru-based Wipro Ltd, the global IT consulting and business process services company, has developed Apollo, a new fraud control and anomaly detection platform designed to help organizations address challenges in managing fraud, risk and compliance. Apollo performs real-time anomaly detection by applying big data analytics and machine learning.


Apollo is a patent-pending platform based on a bank of 350 algorithms offering end-to-end control, from data ingestion to reporting and case management.

R Guha, Wipro's head of corporate business development, says new research finds that up to 5 percent of an organization's total revenue could be lost to fraud.

"A fraud typically gets detected only after 18 months, and many go undetected," Guha says. "If a fraud results from collusion between two groups with senior-level perpetrators, its impact is greater and the chance of detection lower. So we've enabled the Apollo platform to be customized and used for enterprise functions like regulatory compliance, workforce policy compliance, procurement and payments, and resource management, to reduce occupational fraud to some extent."

Coimbatore-based S N Ravichandran, a cyberfraud investigator and member of the National Cyber Association of India, says that given the increase in fraud (whose estimated annual cost to Indian organizations is more than $1 billion), a framework for controlling these incidents is a must.

"However, it must be legal, non-intrusive and done with the employee's knowledge," Ravichandran says. "Practitioners must observe caution while sourcing data from multiple applications."

Need for a Framework

Before Apollo, there have been solutions from IBM, TCS, SAS, Zoho and others featuring fraud detection models based on big data analytics. Most have been domain-specific, following a particular business logic.

By leveraging big data, says Guha, Apollo helps organizations move from a reactive posture to a rule-based regime, with pattern-based controls allowing risk and compliance stakeholders to keep checks on violations.

Apollo is built around:

  • Adaptive learning, helping organizations move from rule-based anomaly detection to predicting and prioritizing fraud;
  • Pre-built rule library and algorithms, battle-tested for varied scenarios and industries;
  • End-to-end ownership from data ingestion to reporting, including case management.

Deepak Jain, senior vice president and head of internal audit at Wipro, who has been using traditional ACL tools for audit, risk and data analysis, says he needs a platform that addresses fraud challenges. His challenge is not data availability, but the platform's ability to handle data volume, analyze data quality, respond to incidents in real time, prioritize data and build fraud-detection expertise.

"The tools should help detect leakage and monitor all points of control failure related to authorization, access and transmission, to prevent IP theft and protect IP," Jain says. "With the correct architecture, it offers prioritization of identified anomalies, facilitating early investigation and proactive detection of potential fraudulent practices."

CISOs seem to respond well to big data analytics baked into the framework. Delhi-based S Sriram Natarajan, chief risk officer-retail banking and cards at Quatrro, a BPO organization, says big data helps track digital fraud. "By itself inconclusive, big data must be integrated with customer data to detect patterns."

Ravichandran says many CISOs aren't competent to analyze big data, profile an individual using data or secure the data they've collected.

"Some of the biggest scams occurred following faulty analysis, and worse, analysts and fraudsters colluding," he says.

Fraud Detection Architecture

Apollo is also built for data handling, reports and case management. Its architecture revolves around data sourcing, business models and outcomes for investigations.

As a first step, data is sourced from multiple applications at real-time, daily, monthly and ad hoc frequencies, and resilience to feed discrepancies is built in through an alerting mechanism.

The second step involves the business models, with rules written around 350 models across modules, prescribing detection logic that combines rule-based and machine learning models with scoring to prioritize investigations.

The third step involves workflow-driven task allocation and tracking to closure, with role-based dashboards and reports across browsers and handheld devices; incidents are sent to GRC platforms for an aggregate view.
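Wipro has not published Apollo's internals, but the scoring idea in the second step, combining rule hits with a model-derived anomaly score into a single priority for investigators, can be sketched in outline. Every rule, weight, field name and score below is an illustrative assumption, not Apollo's actual logic:

```python
# Hypothetical sketch: blending rule-based flags with an ML-style
# anomaly score to rank incidents for investigation. All rules,
# weights and thresholds here are invented for illustration.

def rule_flags(txn):
    """Hand-written detection rules (hypothetical examples)."""
    flags = []
    if txn["amount"] > 10_000:
        flags.append("high_value")
    if txn["approver"] == txn["requester"]:
        flags.append("self_approval")
    if txn["hour"] < 6 or txn["hour"] > 22:
        flags.append("off_hours")
    return flags

def priority_score(txn, anomaly_score, rule_weight=10, ml_weight=100):
    """Blend rule hits with a 0..1 anomaly score into one rank value."""
    return len(rule_flags(txn)) * rule_weight + anomaly_score * ml_weight

transactions = [
    {"id": "T1", "amount": 15_000, "approver": "alice",
     "requester": "alice", "hour": 23},
    {"id": "T2", "amount": 500, "approver": "bob",
     "requester": "carol", "hour": 11},
]
# In a real pipeline these scores would come from a trained model.
anomaly_scores = {"T1": 0.9, "T2": 0.1}

ranked = sorted(transactions,
                key=lambda t: priority_score(t, anomaly_scores[t["id"]]),
                reverse=True)
for t in ranked:
    print(t["id"], priority_score(t, anomaly_scores[t["id"]]))
```

The design point the article attributes to Apollo is this combination: rules alone catch only known patterns, while the learned score lets previously unseen behavior float to the top of the investigation queue.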

"We've used Hadoop and machine learning tools in big data to source data and analyze it to provide a mechanism for detecting control failures and performing process control testing based on the number of incidents identified for a specific risk," Guha says.

Ravichandran agrees: "Anomalies are detected by studying transaction patterns over time; deviations are flagged and investigated."
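Ravichandran's description, studying transaction patterns over time and flagging deviations, corresponds to basic statistical outlier detection. A minimal sketch using the median absolute deviation (a robust alternative to mean/stdev, which a single large fraud would distort); the 3.5 cutoff is a conventional choice, not anything prescribed by Apollo:

```python
import statistics

def flag_deviations(amounts, threshold=3.5):
    """Return indices of amounts far from the historical median,
    measured by the median absolute deviation (MAD)."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # no spread at all: nothing to flag
    # 0.6745 rescales MAD so the score behaves like a z-score
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

history = [120, 95, 110, 105, 98, 102, 5000, 115]
print(flag_deviations(history))  # → [6]: the 5,000 payment stands out
```

Flagged indices would then feed the investigation workflow described above, with the caveat Ravichandran raises: a statistical deviation is a pointer, not evidence.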

Value Proposition

Experts do not recall any major success with the existing analytical tools. However, Sriram says, "Any model succeeds if it adapts to dynamic situations, is agnostic to open source/licensed software tools and allows for manual review of exceptions in real-time."

"The issue would be when the open source tools are not integrated with in-house BI tools to detect and analyze data," he says.

Some agree the framework will drive substantial benefits and offers a clear value proposition.

Jain says it will succeed if it reduces risk and delivers process efficiency, payment recovery and direct effort savings.

Some say using big data tools will enable process and systemic controls driven by analysis of incidents: redundant checks can be eliminated, and enhanced intelligence is ensured through a common, consistent view of data and red flags.

Guha says customers will see a 40 to 50 percent reduction in compliance efforts as the focus shifts from routine data management to refining detection algorithms and driving policy and systemic fixes.

Security leaders recommend a well-planned proof-of-concept pilot at a customer site, using data the enterprise owns or has legally acquired as part of its statutory requirements.

However, Ravichandran cautions, "Investigators must remember human nature is unpredictable. Any conclusion is only a pointer; hard evidence must support action."


About the Author

Geetha Nandikotkur

Managing Editor, Asia & the Middle East, ISMG

Nandikotkur is an award-winning journalist with over 20 years' experience in newspapers, audio-visual media, magazines and research. She has an understanding of technology and business journalism, and has moderated several roundtables and conferences, in addition to leading mentoring programs for the IT community. Prior to joining ISMG, Nandikotkur worked for 9.9 Media as a Group Editor for CIO & Leader, IT Next and CSO Forum.



