Understanding the Advice of Commissions-Motivated Agents: Evidence from the Indian Life Insurance Market

Last registered on December 16, 2015

Pre-Trial

Trial Information

General Information

Title
Understanding the Advice of Commissions-Motivated Agents: Evidence from the Indian Life Insurance Market
RCT ID
AEARCTR-0000985
Initial registration date
December 16, 2015


First published
December 16, 2015, 12:46 PM EST


Locations

Region

Primary Investigator

Affiliation
HBS

Other Primary Investigator(s)

PI Affiliation
Harvard University
PI Affiliation
The University of Pennsylvania - The Wharton School
PI Affiliation
Harvard Business School

Additional Trial Information

Status
Ongoing
Start date
2010-01-05
End date
2016-01-31
Secondary IDs
Abstract
We conduct a series of field experiments to evaluate the quality of advice provided by life insurance agents in India. Agents overwhelmingly recommend unsuitable, strictly dominated products, which provide high commissions to the agent. Agents cater to the beliefs of uninformed consumers, even when those beliefs are wrong. We also find that agents appear to focus on maximizing the amount of premiums (and therefore commissions) that customers pay, rather than on how much insurance coverage customers need. A natural experiment requiring disclosure of commissions for a specific product results in agents recommending alternative products with high commissions but no disclosure requirement. A follow-up agent survey sheds light on the extent to which poor advice reflects both commission incentives and agents' limited product knowledge.
External Link(s)

Registration Citation

Citation
Anagol, Santosh et al. 2015. "Understanding the Advice of Commissions-Motivated Agents: Evidence from the Indian Life Insurance Market." AEA RCT Registry. December 16. https://doi.org/10.1257/rct.985-1.0
Former Citation
Anagol, Santosh et al. 2015. "Understanding the Advice of Commissions-Motivated Agents: Evidence from the Indian Life Insurance Market." AEA RCT Registry. December 16. https://www.socialscienceregistry.org/trials/985/history/6352
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
This was an audit study in which hired staff visited life insurance agents and inquired about products available for sale. The various "interventions" were the scripts followed by the audit staff, and consisted of: mentioning a preference for a particular type of product; mentioning whether they had visited other insurance agents to inquire about products; expressing a high or low level of financial sophistication; and describing their own characteristics and needs to indicate suitability for different types of insurance.

Intervention by Experiment:

Quality of Advice refers to the experiment where we varied the auditor's needs, beliefs, and the source of their beliefs (competing agent or friend).

Disclosure refers to the experiment where we varied whether the auditor made a disclosure inquiry, both before and after the mandatory disclosure law, to test the law's effect on agent behavior.

Sophistication refers to the experiment where we varied the auditors' expressed financial sophistication.
Intervention (Hidden)
Intervention Start Date
2010-01-10
Intervention End Date
2014-06-06

Primary Outcomes

Primary Outcomes (end points)
Type of insurance ultimately offered by the agent (term or whole)
Level of coverage
Size of premium
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
OVERALL DESIGN

In this section we describe the basic experimental setup common to all experiments we ran in this study. Auditors were recruited via the employee networks of the Center for Microfinance (CMF), with the goal of recruiting reliable, capable individuals who would be able to conduct the audits effectively.
The audit team was led by a full-time audit manager who had previously managed a financial product sales team for an international bank. This employee, along with a principal investigator, provided intensive introductory training on life insurance. Each auditor was subsequently trained in the specific scripts they were to follow when meeting with the agents. Each auditor's script was customized to match the auditor's real-life situation (number of children, place of residence, etc.). However, auditors were given uniform and consistent language to use when asking about insurance products and seeking recommendations. Auditors memorized the scripts, as they would be unable to use notes in their meetings with the agents.
Immediately following each interview, auditors completed an exit interview form, which was entered and checked for consistency. Neither the auditors nor their manager were told the purpose of the study or the specific hypotheses we sought to test. Auditors were instructed not to lie during any of the sessions. The audit process was designed to mimic customer behavior as closely as possible and to allow our auditors to act naturally. The audit scripts were written by a former life insurance salesperson, with the goal of representing typical transactions.
Following pilots, we ran a series of experiments to understand under what circumstances advice might improve. In each experiment, treatments were randomly assigned to auditors, and auditors to agents. Because the randomizations were done independently, each auditor did not necessarily conduct an equal number of treatment and control audits for any given intervention of interest. Since we were identifying agents as the experiment proceeded, we randomized in daily batches. To ensure treatment fidelity, auditors were assigned to use only one particular treatment script on a given day.
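The daily batch randomization described above can be sketched as follows. This is an illustrative sketch only, not the study's actual code; the function and variable names are assumptions.

```python
import random

def randomize_daily_batch(treatments, auditors, agents, seed=None):
    """Sketch of one day's batch randomization: each auditor is
    independently assigned a single treatment script for the day,
    and each agent identified that day is independently assigned
    an auditor to visit them."""
    rng = random.Random(seed)
    # One script per auditor per day, so an auditor uses only that
    # script in all of the day's audits (treatment fidelity).
    auditor_script = {auditor: rng.choice(treatments) for auditor in auditors}
    # Auditors assigned to the agents identified that day.
    agent_visits = {agent: rng.choice(auditors) for agent in agents}
    return auditor_script, agent_visits
```

Because the two assignments are independent draws, a given auditor need not end up with a balanced number of treatment and control audits, consistent with the note above.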
Life insurance agents were identified via a number of sources, most of which were websites with national listings of life insurance agents. Contact procedures were identical across treatments. While some agents were visited more than once, care was taken to ensure that no auditor visited the same agent twice and that any repeat visits were spaced at least four weeks apart, both to minimize the burden on the agents and to reduce the chance that an agent would learn of the study. At the experiments' conclusion, auditors were offered a bonus that they could put toward purchasing a life insurance plan of their own choosing.
THE THREE EXPERIMENTS
Below we report the audit counts from our three experiments, disaggregated by treatment combination. The first column gives the total number of audits for each treatment combination, the second the number of auditors involved, and the third the number of distinct agents visited. The fourth column reports the mean of the main dependent variable, by treatment assignment, for each experiment. (a) Since agents may have been visited by more than one auditor, the number of agents visited is less than the total number of audits.
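Given exit-interview data in a simple record layout, the four reported columns could be tabulated per treatment combination as below. The field names ('arm', 'auditor_id', 'agent_id', 'outcome') are hypothetical, chosen only for illustration.

```python
from collections import defaultdict

def tabulate_arms(audits):
    """For each treatment combination, compute: total audits, number
    of auditors involved, distinct agents visited, and the mean of a
    binary outcome (e.g. whether a term policy was recommended).
    Record fields are assumed for illustration."""
    grouped = defaultdict(list)
    for rec in audits:
        grouped[rec["arm"]].append(rec)
    table = {}
    for arm, recs in grouped.items():
        table[arm] = {
            "audits": len(recs),
            # Count distinct auditors and agents, not rows.
            "auditors": len({r["auditor_id"] for r in recs}),
            "agents": len({r["agent_id"] for r in recs}),
            "mean_outcome": sum(r["outcome"] for r in recs) / len(recs),
        }
    return table
```

Counting distinct agents rather than rows reflects the footnote above: repeat visits by different auditors make the agent count smaller than the audit count.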



Experimental Design Details

Panel A: Quality of Advice (City #1)
By need, belief, and source of belief (competition):

Treatment combination | Audits | Auditors | Agents | Term Recommended
Need Term, Bias Term, recommendation from other agent | 61 | 4 | 57 | 0.26
Need Term, Bias Term, recommendation from friend | 65 | 4 | 61 | 0.25
Need Term, Bias Whole, recommendation from other agent | 57 | 5 | 53 | 0.19
Need Term, Bias Whole, recommendation from friend | 75 | 4 | 70 | 0.09
Need Whole, Bias Term, recommendation from other agent | 77 | 4 | 70 | 0.12
Need Whole, Bias Term, recommendation from friend | 77 | 4 | 71 | 0.12
Need Whole, Bias Whole, recommendation from other agent | 68 | 4 | 62 | 0.01
Need Whole, Bias Whole, recommendation from friend | 77 | 5 | 73 | 0.03
Total (a) | 557 | | 304 |

Panel B: Disclosure Experiment (City #2)
By timing and whether the auditor inquired about commission:

Treatment combination | Audits | Auditors | Agents | ULIP Recommended
Ask about commission, pre-disclosure requirement | 82 | 4 | 67 | 0.85
Ask about commission, post-disclosure requirement | 61 | 3 | 58 | 0.54
Do not ask about commission, pre-disclosure requirement | 67 | 4 | 54 | 0.81
Do not ask about commission, post-disclosure requirement | 47 | 3 | 40 | 0.57
Total | 257 | | 198 |

Panel C: Sophistication Experiment (City #2)
By level of expressed sophistication:

Treatment | Audits | Auditors | Agents | Term Recommended
Low level of sophistication | 114 | 7 | 110 | 0.18
High level of sophistication | 103 | 6 | 103 | 0.27
Total | 217 | | 209 |
Randomization Method
Randomization done in office by a computer
Randomization Unit
In each experiment, treatments were randomly assigned to auditors, and auditors to agents. Since we were identifying agents as the experiment proceeded, we randomized in daily batches.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Our quality of advice, disclosure, and sophistication experiments had, respectively, 557 audits and 304 agents; 257 audits and 198 agents; and 217 audits and 209 agents.
Sample size: planned number of observations
Our quality of advice, disclosure, and sophistication experiments had, respectively, 557 audits and 304 agents; 257 audits and 198 agents; and 217 audits and 209 agents.
Sample size (or number of clusters) by treatment arms

Panel A: Quality of Advice (City #1)
By need, belief, and source of belief (competition):

Treatment combination | Audits | Auditors | Agents | Term Recommended
Need Term, Bias Term, recommendation from other agent | 61 | 4 | 57 | 0.26
Need Term, Bias Term, recommendation from friend | 65 | 4 | 61 | 0.25
Need Term, Bias Whole, recommendation from other agent | 57 | 5 | 53 | 0.19
Need Term, Bias Whole, recommendation from friend | 75 | 4 | 70 | 0.09
Need Whole, Bias Term, recommendation from other agent | 77 | 4 | 70 | 0.12
Need Whole, Bias Term, recommendation from friend | 77 | 4 | 71 | 0.12
Need Whole, Bias Whole, recommendation from other agent | 68 | 4 | 62 | 0.01
Need Whole, Bias Whole, recommendation from friend | 77 | 5 | 73 | 0.03
Total (a) | 557 | | 304 |

Panel B: Disclosure Experiment (City #2)
By timing and whether the auditor inquired about commission:

Treatment combination | Audits | Auditors | Agents | ULIP Recommended
Ask about commission, pre-disclosure requirement | 82 | 4 | 67 | 0.85
Ask about commission, post-disclosure requirement | 61 | 3 | 58 | 0.54
Do not ask about commission, pre-disclosure requirement | 67 | 4 | 54 | 0.81
Do not ask about commission, post-disclosure requirement | 47 | 3 | 40 | 0.57
Total | 257 | | 198 |

Panel C: Sophistication Experiment (City #2)
By level of expressed sophistication:

Treatment | Audits | Auditors | Agents | Term Recommended
Low level of sophistication | 114 | 7 | 110 | 0.18
High level of sophistication | 103 | 6 | 103 | 0.27
Total | 217 | | 209 |
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
n/a
IRB

Institutional Review Boards (IRBs)

IRB Name
Committee on the Use of Human Subjects at Harvard University
IRB Approval Date
2010-01-08
IRB Approval Number
F18557

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials