Simple self-supervised pruning metrics: Discarding 20 percent of ImageNet without sacrificing performance

Recent AI research has found a simple self-supervised pruning metric that allows researchers to discard 20% of ImageNet without sacrificing performance, beating neural scaling laws via data pruning.

Neural scaling laws describe how a machine-learning model's performance improves as you scale up the model, the amount of computation, and the number of training data points. Since we have more computing power and can collect more data than ever, we should, in principle, be able to drive test error down to a very small value.
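To make the scaling-law idea concrete, here is a minimal sketch that assumes test error falls as a power law in the number of training examples; the constants `a` and `alpha` are illustrative stand-ins, not fitted values from any particular study.

```python
# Illustrative only: assume test error falls as a power law in dataset size,
# error(N) ≈ a * N**(-alpha). The constants below are made up for illustration.
a = 0.1
alpha = 0.086  # small exponents like this are typical of empirical scaling fits

def predicted_error(n_examples: float) -> float:
    """Test error predicted by the assumed power law."""
    return a * n_examples ** (-alpha)

for n in (1e6, 1e7, 1e8):
    print(f"{n:.0e} examples -> predicted test error {predicted_error(n):.2%}")

# Each 10x increase in data shaves off only a fraction of a percentage point,
# which is why chasing small error reductions gets expensive so quickly.
```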

This approach is not ideal, however. Even with plenty of computing power available, scaling is not sustainable because of its high computational cost. Reducing test error from 3.4% to 2.8%, for example, may require an order of magnitude more data, computation, or energy. What could the solution be?
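One answer is data pruning: score every training example with a cheap, label-free metric and keep only the most informative ones. The sketch below assumes one plausible form of such a self-supervised metric, distance to the nearest k-means centroid in an embedding space, and discards the most prototypical 20% of examples. The random embeddings, cluster count, and prune fraction are illustrative choices, not the exact recipe from the research the article summarizes.

```python
import numpy as np
from sklearn.cluster import KMeans

# Minimal sketch of a self-supervised pruning metric (illustrative):
# 1) embed every image with a self-supervised encoder (stand-in: random vectors),
# 2) cluster the embeddings with k-means,
# 3) score each example by its distance to the nearest centroid,
# 4) discard the lowest-scoring (most prototypical, "easiest") 20% of examples.

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(10_000, 128))  # placeholder for real SSL embeddings

kmeans = KMeans(n_clusters=100, n_init=10, random_state=0).fit(embeddings)
dist_to_centroid = np.linalg.norm(
    embeddings - kmeans.cluster_centers_[kmeans.labels_], axis=1
)

prune_fraction = 0.20
threshold = np.quantile(dist_to_centroid, prune_fraction)
keep_mask = dist_to_centroid > threshold  # keep the harder, less redundant examples

print(f"Kept {keep_mask.sum()} of {len(embeddings)} examples "
      f"({keep_mask.mean():.0%}) for training.")
```

The appeal of a metric like this is that it needs no labels and is far cheaper to compute than the training runs it saves, so the pruning step does not eat the compute budget it is meant to protect.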

Source:

Latest AI Research Finds a Simple Self-Supervised Pruning Metric That Enables Them to Discard 20% of ImageNet Without Sacrificing Performance, Beating Neural Scaling Laws via Data Pruning