About me
This is a page not in the main menu.
Published in ISIT, 2022
In this work, we proved new upper and lower bounds for communication over a noisy channel in the presence of a malicious jamming adversary. Our lower bound improves upon the generalized Gilbert-Varshamov bound for general arbitrarily varying channels (AVCs), while the upper bound generalizes the well-known Elias-Bassalygo bound (previously known for binary and q-ary alphabets).
Download here
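For context, the two classical binary bounds that this work generalizes are standard coding-theory facts (stated here as background, not as results from the paper). For binary codes of rate R and relative minimum distance δ, with H the binary entropy function:

```latex
% Gilbert--Varshamov (existence / lower bound): for \delta \in (0, 1/2),
% there exist binary codes with
R \;\ge\; 1 - H(\delta).
% Elias--Bassalygo (upper bound): every binary code of relative distance \delta has
R \;\le\; 1 - H\!\Big(\tfrac{1}{2}\big(1 - \sqrt{1 - 2\delta}\big)\Big) + o(1),
% where H(x) = -x \log_2 x - (1 - x)\log_2(1 - x).
```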
Published in MLSys, 2024
In this paper, we addressed optimal gradient compression in the distributed training of neural networks. Our proposed algorithm, called L-GreCo, uses dynamic programming to find an optimal layer-wise compression allocation. L-GreCo preserves model accuracy while providing training-time speed-ups under different compression schemes across multiple tasks and architectures.
Download here
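To illustrate the kind of layer-wise dynamic program the abstract describes, here is a minimal knapsack-style sketch. Everything in it is a hypothetical placeholder rather than L-GreCo's actual interface: the per-layer size/error tables, the global error budget, and the discretization are all assumptions.

```python
import math

def choose_levels(sizes, errors, budget, steps=100):
    """Pick one compression level per layer so that the summed compression
    error stays under `budget` while the total compressed size is minimized.

    sizes[l][k]  -- compressed size of layer l at level k (assumed given)
    errors[l][k] -- compression error of layer l at level k (assumed given)
    Returns a list of level indices, one per layer, or None if infeasible.
    """
    unit = budget / steps                    # discretize the error axis
    INF = float("inf")
    # dp[b] = (min total size using at most b error units, chosen levels)
    dp = [(INF, None)] * (steps + 1)
    dp[0] = (0.0, [])
    for layer_sizes, layer_errors in zip(sizes, errors):
        nxt = [(INF, None)] * (steps + 1)
        for b, (size_so_far, picks) in enumerate(dp):
            if picks is None:
                continue
            for k, (s, e) in enumerate(zip(layer_sizes, layer_errors)):
                nb = b + math.ceil(e / unit - 1e-9)  # tolerate float rounding
                if nb > steps:
                    continue                 # this choice would blow the budget
                if size_so_far + s < nxt[nb][0]:
                    nxt[nb] = (size_so_far + s, picks + [k])
        dp = nxt
    feasible = [entry for entry in dp if entry[1] is not None]
    return min(feasible)[1] if feasible else None

# Toy usage: two layers, three hypothetical compression levels each.
sizes  = [[10.0, 6.0, 3.0], [8.0, 5.0, 2.0]]
errors = [[0.0, 0.1, 0.4], [0.0, 0.2, 0.5]]
print(choose_levels(sizes, errors, budget=0.5))  # -> [1, 1]
```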
Published in NeurIPS, 2025
Model merging, a method that combines the parameters and embeddings of multiple fine-tuned large language models (LLMs), offers a promising approach to enhancing model performance across various tasks while maintaining computational efficiency. This paper introduces Activation-Informed Merging (AIM), a technique that integrates information from the activation space of LLMs into the merging process to improve performance and robustness. AIM is designed as a flexible, complementary solution applicable to any existing merging method. It aims to preserve critical weights from the base model, drawing on principles from continual learning (CL) and model compression. Using a task-agnostic calibration set, AIM selectively prioritizes essential weights during merging. We empirically demonstrate that AIM significantly enhances the performance of merged models across multiple benchmarks. Our findings suggest that incorporating activation-space information can substantially advance model merging strategies for LLMs, with up to a 40% increase in benchmark performance.
Download here
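As a rough illustration of the merging idea described above, the following numpy sketch interpolates between the base model and the average of fine-tuned models, shrinking the update on weights that a calibration set marks as important. The importance scores and the interpolation rule are hypothetical stand-ins, not the paper's exact AIM procedure.

```python
import numpy as np

def merge_with_activation_importance(base_w, task_ws, importance, alpha=0.5):
    """Hypothetical activation-informed merge of one weight matrix.

    base_w     -- base-model weights
    task_ws    -- list of fine-tuned weight matrices (same shape)
    importance -- per-weight scores in [0, 1], assumed to be derived from
                  activations on a task-agnostic calibration set
    High-importance weights stay close to the base model; low-importance
    weights move toward the plain task average.
    """
    task_avg = np.mean(task_ws, axis=0)      # ordinary parameter averaging
    step = alpha * (1.0 - importance)        # damp updates on critical weights
    return base_w + step * (task_avg - base_w)

# Toy usage with made-up weights and importance scores.
base = np.zeros((2, 2))
tasks = [np.ones((2, 2)), 2.0 * np.ones((2, 2))]
imp = np.array([[0.9, 0.1], [0.5, 0.0]])     # hypothetical calibration scores
print(merge_with_activation_importance(base, tasks, imp))
```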