Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
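For reference, a minimal sketch of what that setting looks like in a Jekyll _config.yml; the future option is standard Jekyll, but the surrounding file contents are assumed:

    # _config.yml
    future: false   # posts dated in the future are skipped when the site is built
    # future: true  # publish future-dated posts immediately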

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum; I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum; I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum; I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum; I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Contact

Publications

Cache Miss Rate Predictability via Neural Networks

Published in NeurIPS 2018 Workshop on ML in Systems, 2018

[PDF]

Abstract

A program run, in the setting of computer architecture and compilers, can be characterized in part by its memory access patterns. We approach the problem of analyzing these patterns using machine learning. We characterize memory accesses using a sequence of cache miss rates, and present a new data set for this task. The data set draws from programs run on various Java virtual machines, and C and Fortran compilers. We work towards answering the scientific question: How predictable is a program’s cache miss rate from interval to interval as it executes? We report the results of three distinct ANN models, which have been shown to be effective in sequence modeling. We show that programs can be differentiated in terms of the predictability of their cache miss rates.

Recommended citation: Rishikesh Jha, Saket Tiwari, Arjun Kuravally, Eliot Moss. “Cache Miss Rate Predictability via Neural Networks.” NeurIPS 2018 Workshop on ML in Systems, 2018.

Learning to Few-Shot Learn Across Diverse Natural Language Classification Tasks

Published in 2020

[PDF]

Abstract

Self-supervised pre-training of transformer models has shown enormous success in improving performance on a number of downstream tasks. However, fine-tuning on a new task still requires large amounts of task-specific labelled data to achieve good performance. We consider this problem of learning to generalize to new tasks with few examples as a meta-learning problem. While meta-learning has shown tremendous progress in recent years, its application is still limited to simulated problems or problems with limited diversity across tasks. We develop a novel method, LEOPARD, which enables optimization-based meta-learning across tasks with different numbers of classes, and evaluate existing methods on generalization to diverse NLP classification tasks. LEOPARD is trained with the state-of-the-art transformer architecture and shows strong generalization to tasks not seen at all during training, with as few as 8 examples per label. On 16 NLP datasets, across a diverse set of tasks such as entity typing, relation extraction, natural language inference, sentiment analysis, and several other text categorization tasks, we show that LEOPARD learns better initial parameters for few-shot learning than self-supervised pre-training or multi-task training, outperforming many strong baselines, for example, increasing F1 from 49% to 72%.

Recommended citation: Rishikesh Jha, Trapit Bansal, Andrew McCallum. “Learning to Few-Shot Learn Across Diverse Natural Language Classification Tasks.” 2020.

Emission-aware Energy Storage Scheduling for a Greener Grid

Published in 2020

[PDF]

Abstract

Reducing our reliance on carbon-intensive energy sources is vital for reducing the carbon footprint of the electric grid. Although the grid is seeing increasing deployments of clean, renewable sources of energy, a significant portion of grid demand is still met using traditional carbon-intensive energy sources. In this paper, we study the problem of using energy storage deployed in the grid to reduce the grid’s carbon emissions. While energy storage has previously been used for grid optimizations such as peak shaving and smoothing intermittent sources, our insight is to use distributed storage to enable utilities to reduce their reliance on their less efficient and most carbon-intensive power plants and thereby reduce their overall emission footprint. We formulate the problem of emission-aware scheduling of distributed energy storage as an optimization problem, and use a robust optimization approach that is well-suited for handling the uncertainty in load predictions and intermittent renewables. We evaluate our approach using a state-of-the-art neural network load forecasting technique and real load traces from a distribution grid. Our results show a reduction of over 0.5 million kg in annual carbon emissions, equivalent to a 23.3% drop in our electric grid’s emissions.

Recommended citation: Rishikesh Jha, Stephen Lee, Srinivasan Iyengar, Mohammad Hajiesmaili, Prashant Shenoy, David Irwin. “Emission-aware Energy Storage Scheduling for a Greener Grid.” 2020.

Talks

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.