Commit d523deb
update
Simonlee711 committed Jun 23, 2024
1 parent 8ea367e commit d523deb
Showing 1 changed file with 9 additions and 9 deletions.
MEME/index.html (18 changes: 9 additions & 9 deletions)
@@ -167,14 +167,14 @@ <h2 class="title is-3">Abstract</h2>
   <div class="container">
     <div class="columns is-centered">
       <div class="column is-full-width">
-        <h1 class="title is-2" style="text-align: center; padding-bottom: 10px;">Extracting Affordances Without Extra Annotations</h1>
+        <h1 class="title is-2" style="text-align: center; padding-bottom: 10px;">Transforming Electronic Health Records into Text</h1>
         <!-- <h2 class="subtitle is-4" style="text-align: center;">Defining Visual Affordances</h2> -->
 
         <div class="columns is-centered has-text-centered">
           <div class="column is-two-thirds">
             <div class="content has-text-justified">
               <p>
-                Robot-centric Affordances: While visual affordances have been studied extensively in computer vision, applying them to robotics needs smart tweaks. As we learn from large-scale human videos but apply to robots, we seek an agent-agnostic affordance to facilitate this transfer. Therefore, we define affordance as:
+                One of the main contributions of our methodology is turning tabular Electronic Health Records (EHR) data into text.
               </p>
             </div>
           </div>
@@ -645,11 +645,12 @@ <h3 class="title is-3" style="text-align: center; padding-bottom: 10px;"><strong
<section class="section" id="BibTeX">
<div class="container is-max-desktop content">
<h2 class="titile">BibTeX</h2>
<pre><code>@inproceedings{bahl2023affordances,
title={Affordances from Human Videos as a Versatile Representation for Robotics},
author={Bahl, Shikhar and Mendonca, Russell and Chen, Lili and Jain, Unnat and Pathak, Deepak},
journal={CVPR},
year={2023}
<pre><code>@misc{lee2024multimodal,
title={Multimodal Clinical Pseudo-notes for Emergency Department Prediction Tasks using Multiple Embedding Model for EHR (MEME)},
author={Lee, Simon A and Jain, Sujay and Chen, Alex and Biswas, Arabdha and Fang, Jennifer and Rudas, Akos and Chiang, Jeffrey N},
journal={arXiv preprint arXiv:2402.00160},
year={2024}

}</code></pre>
</div>
</section>
Expand All @@ -663,8 +664,7 @@ <h2 class="titile">Acknowledgements</h2>
<div class="is-vcentered interpolation-panel">
<div class="container content">
<p style = "font-size: 16px">
We thank Shivam Duggal, Yufei Ye and Homanga Bharadhwaj for fruitful discussions and are grateful to Shagun Uppal, Ananye Agarwal, Murtaza Dalal and Jason Zhang for comments on early drafts of this paper. RM, LC, and DP are supported by NSF IIS-2024594, ONR MURI N00014-22-1-2773 and ONR N00014-22-1-2096.
</p>
We thank Arabdha Biswas for helping us run a random forest model. </p>
</div>
</div>
</div>
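Note: the paragraph added in the first hunk describes MEME's central preprocessing step of converting tabular EHR data into text ("pseudo-notes", per the paper title in the updated BibTeX entry). As a rough illustration of that idea only (this is not the authors' code; the function name, field names, and sentence template below are hypothetical), a single tabular record might be serialized like this:

# Illustrative sketch: turn one tabular EHR row into a text "pseudo-note".
# The schema and template are invented for illustration, not taken from MEME.

def to_pseudo_note(record: dict) -> str:
    """Serialize a tabular EHR row into a readable sentence."""
    parts = []
    for field, value in record.items():
        if value is None:  # skip missing entries rather than inventing them
            continue
        parts.append(f"{field.replace('_', ' ')}: {value}")
    return ". ".join(parts) + "."

if __name__ == "__main__":
    triage_row = {
        "chief_complaint": "chest pain",
        "heart_rate": 104,
        "arrival_transport": "ambulance",
        "pain_score": None,  # missing in the source table
    }
    print(to_pseudo_note(triage_row))
    # -> chief complaint: chest pain. heart rate: 104. arrival transport: ambulance.

The resulting strings can then be fed to a text encoder like any clinical note; one such serialization per EHR modality is what would make a "multiple embedding" setup possible.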
