Updates #310

Merged: 2 commits, Apr 25, 2025
21 changes: 19 additions & 2 deletions source/_data/SymbioticLab.bib
@@ -2009,7 +2009,7 @@ @Article{curie:arxiv25

@Article{cornstarch:arxiv25,
author = {Insu Jang and Runyu Lu and Nikhil Bansal and Ang Chen and Mosharaf Chowdhury},
title = {Cornstarch: Distributed Multimodal Training Must Be Multimodality-Aware },
title = {Cornstarch: Distributed Multimodal Training Must Be Multimodality-Aware},
year = {2025},
month = {March},
volume = {abs/2503.11367},
@@ -2024,4 +2024,21 @@ @Article{cornstarch:arxiv25
Multimodal large language models (MLLMs) extend the capabilities of large language models (LLMs) by combining heterogeneous model architectures to handle diverse modalities like images and audio. However, this inherent heterogeneity in MLLM model structure and data types makes makeshift extensions to existing LLM training frameworks unsuitable for efficient MLLM training.
In this paper, we present Cornstarch, the first general-purpose distributed MLLM training framework. Cornstarch facilitates modular MLLM construction, enables composable parallelization of constituent models, and introduces MLLM-specific optimizations to pipeline and context parallelism for efficient distributed MLLM training. Our evaluation shows that Cornstarch outperforms state-of-the-art solutions by up to 1.57x in terms of training throughput.
}
}

@Article{ai-eval-framework:arxiv25,
  author          = {Sarah Jabbour and Trenton Chang and Anindya Das Antar and Joseph Peper and Insu Jang and Jiachen Liu and Jae-Won Chung and Shiqi He and Michael Wellman and Bryan Goodman and Elizabeth Bondi-Kelly and Kevin Samy and Rada Mihalcea and Mosharaf Chowdhury and David Jurgens and Lu Wang},
title = {Evaluation Framework for {AI} Systems in the Wild},
year = {2025},
month = {April},
volume = {abs/2504.16778},
archivePrefix = {arXiv},
eprint = {2504.16778},
url = {https://arxiv.org/abs/2504.16778},
publist_confkey = {arXiv:2504.16778},
publist_link = {paper || https://arxiv.org/abs/2504.16778},
publist_topic = {Systems + AI},
publist_abstract = {
Generative AI (GenAI) models have become vital across industries, yet current evaluation methods have not adapted to their widespread use. Traditional evaluations often rely on benchmarks and fixed datasets, frequently failing to reflect real-world performance, which creates a gap between lab-tested outcomes and practical applications. This white paper proposes a comprehensive framework for how we should evaluate real-world GenAI systems, emphasizing diverse, evolving inputs and holistic, dynamic, and ongoing assessment approaches. The paper offers guidance for practitioners on how to design evaluation methods that accurately reflect real-time capabilities, and provides policymakers with recommendations for crafting GenAI policies focused on societal impacts, rather than fixed performance numbers or parameter sizes. We advocate for holistic frameworks that integrate performance, fairness, and ethics and the use of continuous, outcome-oriented methods that combine human and automated assessments while also being transparent to foster trust among stakeholders. Implementing these strategies ensures GenAI models are not only technically proficient but also ethically responsible and impactful.
}
}
8 changes: 8 additions & 0 deletions source/_posts/Mosharaf-Q-A-Michigan-Theater.md
@@ -0,0 +1,8 @@
---
title: >-
Mosharaf Delivered a Public Lecture and Q&A on Energy-Optimal AI at the Michigan Theater
categories:
- News
date: 2025-04-22 23:05:10
tags:
---