Authors: Chengxing Xie, Canyu Chen, Feiran Jia, Ziyu Ye, Kai Shu, Adel Bibi, Ziniu Hu, Philip Torr, Bernard Ghanem, Guohao Li
Published on: February 07, 2024
Impact Score: 8.12
arXiv ID: arXiv:2402.04559
Summary
- What is new: An investigation of whether LLM agents can accurately simulate human trust behaviors.
- Why this is important: It tests a core assumption of LLM-based social simulation: that agents can realistically mirror human behaviors, with trust as the case study.
- What the research proposes: Using Trust Games from behavioral economics to elicit and compare trust behaviors between humans and LLM agents (a payoff sketch follows this list).
- Results: LLM agents can exhibit trust behaviors with significant behavioral alignment to humans, though with notable biases and differences.
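The Trust Game underlying the study follows the canonical design from behavioral economics (Berg et al., 1995): a trustor can send part of an endowment to a trustee, the amount is multiplied in transit, and the trustee decides how much to return. Below is a minimal Python sketch of these payoff mechanics; the $10 endowment and 3x multiplier are the standard textbook parameters and are assumptions here, not necessarily the paper's exact settings.

```python
# Minimal sketch of the canonical Trust Game payoffs (Berg et al., 1995),
# which the paper adapts for LLM agents. The $10 endowment and 3x multiplier
# are the standard parameters; the paper's exact settings may differ.

def trust_game(endowment: float, sent: float, returned: float,
               multiplier: float = 3.0) -> tuple[float, float]:
    """Compute (trustor, trustee) payoffs for one round of the Trust Game."""
    assert 0 <= sent <= endowment, "trustor can send at most the endowment"
    received = sent * multiplier  # amount the trustee receives
    assert 0 <= returned <= received, "trustee can return at most what was received"
    trustor_payoff = endowment - sent + returned
    trustee_payoff = received - returned
    return trustor_payoff, trustee_payoff

# Example: trustor sends $5 of a $10 endowment; trustee returns $6 of the $15 received.
print(trust_game(endowment=10, sent=5, returned=6))  # -> (11.0, 9.0)
```

The amount sent operationalizes trust, and the amount returned operationalizes trustworthiness, which is what lets the study compare human and LLM-agent behavior on the same quantitative footing.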
Technical Details
Technological frameworks used: Trust Games (behavioral economics)
Models used: Large Language Models
Data used: Simulated dialogues and interaction scenarios generated within the Trust Games framework (see the prompting sketch below)
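To illustrate how such simulated interactions can be elicited, here is a hedged sketch of casting an LLM as the trustor via a chat-completion API. The persona prompt, model name, and number-parsing logic are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch of eliciting a trust decision from an LLM agent.
# The prompt wording, model name, and parsing are illustrative assumptions;
# the paper's actual prompts and models may differ.
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TRUSTOR_PROMPT = (
    "You are playing the Trust Game. You have $10. Any amount you send to "
    "the other player is tripled before they receive it, and they may return "
    "any portion to you. How much do you send? Answer with a single number."
)

def elicit_sent_amount(model: str = "gpt-4") -> float:
    """Ask the model for a trustor decision and parse the dollar amount."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": TRUSTOR_PROMPT}],
    )
    text = response.choices[0].message.content
    match = re.search(r"\d+(?:\.\d+)?", text)  # extract the first number
    # Clamp to the endowment; default to 0 if no number was produced.
    return min(float(match.group()), 10.0) if match else 0.0
```

Running many such elicitations across roles and scenarios yields the distribution of agent decisions that can then be compared against human data from the behavioral-economics literature.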
Potential Impact
Social sciences research, Behavioral economics, AI development and ethics, Customer service and interaction platforms
Want to implement this idea in a business?
We have generated a startup concept here: Trustify.