Authors: Yuchen Zhou, Emmy Liu, Graham Neubig, Michael J. Tarr, Leila Wehbe
Published on: November 15, 2023
Impact Score: 8.22
arXiv code: arXiv:2311.09308
Summary
- What is new: This paper identifies domains that language models (LMs) represent poorly relative to human brain responses, specifically social/emotional intelligence and physical commonsense.
- Why this is important: While recent studies suggest similarities between machine and human language processing, significant differences still exist, particularly in how LMs and humans represent and use language.
- What the research proposes: The researchers propose fine-tuning LMs with data enriched in social/emotional intelligence and physical commonsense to better align their representations with human brain responses.
- Results: After fine-tuning, LMs showed improved alignment with human brain responses as measured in narrative listening and reading tasks.
Technical Details
Technological frameworks used: Data-driven analysis of magnetoencephalography (MEG) recordings
Models used: Language Models (LMs)
Data used: Two datasets from subjects reading and listening to narrative stories
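Brain-LM alignment in this line of work is typically measured with a linear encoding model: regress each brain channel onto the LM's stimulus embeddings and score held-out prediction accuracy. The sketch below illustrates that general technique on synthetic data with closed-form ridge regression; the paper's exact pipeline, regularization, and data are not specified here, so all dimensions and the `encoding_alignment` helper are illustrative assumptions.

```python
import numpy as np

def encoding_alignment(features, brain, alpha=1.0, train_frac=0.8):
    """Fit a ridge regression from LM features to brain responses and
    return the mean per-channel Pearson correlation on held-out data."""
    n = features.shape[0]
    split = int(n * train_frac)
    X_tr, X_te = features[:split], features[split:]
    Y_tr, Y_te = brain[:split], brain[split:]
    # Closed-form ridge: W = (X'X + alpha*I)^{-1} X'Y
    d = X_tr.shape[1]
    W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(d), X_tr.T @ Y_tr)
    pred = X_te @ W
    # Pearson correlation, computed per channel, then averaged
    pred_c = pred - pred.mean(axis=0)
    Y_c = Y_te - Y_te.mean(axis=0)
    denom = np.sqrt((pred_c ** 2).sum(axis=0) * (Y_c ** 2).sum(axis=0))
    r = (pred_c * Y_c).sum(axis=0) / np.maximum(denom, 1e-12)
    return r.mean()

# Synthetic demo: brain responses linearly driven by LM features plus noise
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))       # 500 stimulus time points, 32-dim embeddings
W_true = rng.normal(size=(32, 10))   # 10 hypothetical MEG channels
Y = X @ W_true + 0.1 * rng.normal(size=(500, 10))
score = encoding_alignment(X, Y)     # close to 1.0 when the mapping is linear
```

Under this framing, "improved alignment after fine-tuning" means the fine-tuned LM's features yield a higher held-out correlation score than the base model's features on the same brain recordings.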
Potential Impact
This research could benefit edtech companies by improving the language models behind educational tools, and more broadly any industry that relies on natural language processing for human-computer interaction.
Want to implement this idea in a business?
We have generated a startup concept here: Cognitext.