
Commit 437b966
update projects to include RINGS
1 parent 1365e78

File tree: 9 files changed (+45, -0 lines)


assets/images/rings.jpg (669 KB, binary image)
assets/images/rings.pdf (119 KB, binary file not shown)

data/homepage.yml (9 additions, 0 deletions)
@@ -223,6 +223,15 @@ testimonial:
 enable: true
 title: "Projects"
 items:
+  - name: "No Metric to Rule Them All: Toward Principled Evaluations of Graph-Learning Datasets"
+    position: "<em>Preprint 2025</em>"
+    content: "Benchmark datasets have proved pivotal to the success of graph learning, and good benchmark datasets are crucial to guide the development of the field. Recent research has highlighted problems with graph-learning datasets and benchmarking practices -- revealing, for example, that methods which ignore the graph structure can outperform graph-based approaches on popular benchmark datasets. Such findings raise two questions: (1) What makes a good graph-learning dataset, and (2) how can we evaluate dataset quality in graph learning? Our work addresses these questions. As the classic evaluation setup uses datasets to evaluate models, it does not apply to dataset evaluation. Hence, we start from first principles. Observing that graph-learning datasets uniquely combine two modes -- the graph structure and the node features -- we introduce RINGS, a flexible and extensible mode-perturbation framework to assess the quality of graph-learning datasets based on dataset ablations -- i.e., by quantifying differences between the original dataset and its perturbed representations. Within this framework, we propose two measures -- performance separability and mode complementarity -- as evaluation tools, each assessing, from a distinct angle, the capacity of a graph dataset to benchmark the power and efficacy of graph-learning methods. We demonstrate the utility of our framework for graph-learning dataset evaluation in an extensive set of experiments and derive actionable recommendations for improving the evaluation of graph-learning methods. Our work opens new research directions in data-centric graph learning, and it constitutes a first step toward the systematic evaluation of evaluations. Code will be released soon :-)"
+    image:
+      x: "images/rings.jpg"
+      _2x: "images/rings.jpg"
+    github: "https://github.com/aidos-lab?q=&type=all&language=&sort="
+    google_scholar: "https://arxiv.org/abs/2502.02379"
+
   - name: "Characterizing Physician Referral Networks with Ricci Curvature"
     position: "<em>IPLDSC 2024</em>"
     content: "Identifying (a) systemic barriers to quality healthcare access and (b) key indicators of care efficacy in the United States remains a significant challenge. To improve our understanding of regional disparities in care delivery, we introduce a novel application of curvature, a geometrical-topological property of networks, to Physician Referral Networks. Our initial findings reveal that Forman-Ricci and Ollivier-Ricci curvature measures, which are known for their expressive power in characterizing network structure, offer promising indicators for detecting variations in healthcare efficacy while capturing a range of significant regional demographic features. We also present `apparent`, an open-source tool that leverages Ricci curvature and other network features to examine correlations between regional Physician Referral Networks structure, local census data, healthcare effectiveness, and patient outcomes."

public/images/rings.jpg (669 KB, binary image)
two resized image variants (37.6 KB and 13.2 KB, binary images)

public/index.html (36 additions, 0 deletions)
@@ -354,6 +354,42 @@ <h2 class="rad-fade-down">Projects</h2>
 
 
 
+          <picture>
+            <source srcset="/images/rings_hu17595626429236166956.jpg 1x, /images/rings_hu14547548789854261136.jpg 2x"
+              type="image/webp" />
+            <source srcset="/images/rings.jpg 1x, /images/rings.jpg 2x" type="image/webp">
+            <img width="270" height="180"
+              class="lozad img-responsive"
+              src="data:image/gif;base64,R0lGODlhBwACAIAAAP///wAAACH5BAEAAAEALAAAAAAHAAIAAAIDjI9YADs="
+              srcset="/images/rings.jpg 1x, /images/rings.jpg 2x"
+              data-src="/images/rings.jpg"
+              data-srcset="/images/rings.jpg 1x, /images/rings.jpg 2x" alt="No Metric to Rule Them All: Toward Principled Evaluations of Graph-Learning Datasets" />
+          </picture>
+
+          <div class="project__info">
+            <h4>No Metric to Rule Them All: Toward Principled Evaluations of Graph-Learning Datasets</h4>
+            <span><em>Preprint 2025</em></span>
+            <p>Benchmark datasets have proved pivotal to the success of graph learning, and good benchmark datasets are crucial to guide the development of the field. Recent research has highlighted problems with graph-learning datasets and benchmarking practices -- revealing, for example, that methods which ignore the graph structure can outperform graph-based approaches on popular benchmark datasets. Such findings raise two questions: (1) What makes a good graph-learning dataset, and (2) how can we evaluate dataset quality in graph learning? Our work addresses these questions. As the classic evaluation setup uses datasets to evaluate models, it does not apply to dataset evaluation. Hence, we start from first principles. Observing that graph-learning datasets uniquely combine two modes -- the graph structure and the node features -- we introduce RINGS, a flexible and extensible mode-perturbation framework to assess the quality of graph-learning datasets based on dataset ablations -- i.e., by quantifying differences between the original dataset and its perturbed representations. Within this framework, we propose two measures -- performance separability and mode complementarity -- as evaluation tools, each assessing, from a distinct angle, the capacity of a graph dataset to benchmark the power and efficacy of graph-learning methods. We demonstrate the utility of our framework for graph-learning dataset evaluation in an extensive set of experiments and derive actionable recommendations for improving the evaluation of graph-learning methods. Our work opens new research directions in data-centric graph learning, and it constitutes a first step toward the systematic evaluation of evaluations. Code will be released soon :-)</p>
+            <div class="project__buttons">
+              <a href="https://github.com/aidos-lab?q=&amp;type=all&amp;language=&amp;sort=" class="btn btn-primary btn-social" target="_blank">
+                <i class="icon-github-line"></i> Code
+              </a>
+              <a href="https://arxiv.org/abs/2502.02379" class="btn btn-primary btn-social" target="_blank">
+                <i class="icon-profile-fill"></i> Paper
+              </a>
+            </div>
+          </div>
+        </div>
+      </div>
+
+      <div class="col-12 mb-5">
+        <div class="project">
+
+
+
+
+
+
           <picture>
             <source srcset="/images/hsa_networks_hu12688522630341836570.jpg 1x, /images/hsa_networks_hu2669093310046598025.jpg 2x"
               type="image/webp" />

Comments (0)