Authors: Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey
Published on: February 03, 2024
Impact Score: 8.45
arXiv: 2402.02028
Summary
- What is new: Introduction of a novel method to create unlearnable examples for time series data, protecting it from unauthorized deep learning training.
- Why this is important: Existing UE generation methods primarily focus on images, and there’s a lack of effective techniques for protecting time series data.
- What the research proposes: A new kind of error-minimizing noise selectively applied to specific time series segments, making them unlearnable to DNNs but still usable for legitimate purposes.
- Results: The method effectively protects time series data in both classification and generation tasks without harming their utility for authorized use.
Technical Details
Technological frameworks used: Deep Neural Networks (DNNs)
Models used: Not specified
Data used: A wide range of time series datasets
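The core idea, error-minimizing noise confined to selected time series segments, can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a toy linear surrogate model, a hand-picked segment mask, and made-up hyperparameters (the budget `eps`, step sizes, and the masked range are all illustrative assumptions). It only shows the bi-level pattern: train the surrogate, then nudge the noise in the direction that *lowers* the training loss, so the perturbed data carries little learnable signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time-series classification: 2 classes, length-32 series.
n, T = 64, 32
X = rng.normal(size=(n, T))
y = rng.integers(0, 2, size=n)
X[y == 1] += 0.5  # weak class signal

# Segment mask: perturb only timesteps 8..16 (hypothetical choice;
# the paper selects segments by its own criterion).
mask = np.zeros(T)
mask[8:16] = 1.0

eps = 0.3                 # L_inf budget for the noise (illustrative)
w = np.zeros(T)           # linear "surrogate model" weights
b = 0.0
delta = np.zeros((n, T))  # per-sample error-minimizing noise

def grads(Xp):
    """Logistic-loss gradients w.r.t. weights and inputs."""
    z = Xp @ w + b
    p = 1.0 / (1.0 + np.exp(-z))
    err = p - y                  # dL/dz for each sample
    gw = Xp.T @ err / n
    gb = err.mean()
    gx = np.outer(err, w) / n    # dL/dx for each sample
    return gw, gb, gx

# Bi-level alternation: inner step fits the surrogate, outer step
# moves the noise to *minimize* (not maximize) the training loss.
for _ in range(200):
    Xp = X + delta
    gw, gb, gx = grads(Xp)
    w -= 0.5 * gw
    b -= 0.5 * gb
    delta -= 5.0 * gx * mask           # noise only inside the segment
    delta = np.clip(delta, -eps, eps)  # keep the perturbation small

# Noise respects the budget and never leaves the masked segment.
assert np.abs(delta).max() <= eps
assert np.abs(delta * (1 - mask)).max() == 0.0
```

Because the noise is confined to a small masked segment and bounded by `eps`, the rest of each series is untouched, which is what lets protected data remain usable for legitimate, non-training purposes.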
Potential Impact
This research could affect markets that rely on time series data, such as finance and healthcare, as well as companies specializing in data security and machine learning.
Want to implement this idea in a business?
We have generated a startup concept here: TimeGuard AI.