Merge pull request #181 from erodola/new-pubs
added several new publications
erodola authored Jul 6, 2024
2 parents 7b6c9ac + 41acaad commit ff1855a
Showing 62 changed files with 1,107 additions and 30 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -8,3 +8,5 @@ jsconfig.json
node_modules/
go.sum
.hugo_build.lock

istruzioni.md
9 changes: 9 additions & 0 deletions content/publication/baieri-2024-arap/cite.bib
@@ -0,0 +1,9 @@
@misc{baieri-2024-arap,
title={Implicit-ARAP: Efficient Handle-Guided Deformation of High-Resolution Meshes and Neural Fields via Local Patch Meshing},
author={Daniele Baieri and Filippo Maggioli and Zorah Lähner and Simone Melzi and Emanuele Rodol\`a},
year={2024},
eprint={2405.12895},
archivePrefix={arXiv},
primaryClass={cs.GR},
url={https://arxiv.org/abs/2405.12895},
}
Binary file added content/publication/baieri-2024-arap/featured.png
49 changes: 49 additions & 0 deletions content/publication/baieri-2024-arap/index.md
@@ -0,0 +1,49 @@
---
# Documentation: https://wowchemy.com/docs/managing-content/

title: 'Implicit-ARAP: Efficient Handle-Guided Deformation of High-Resolution Meshes and Neural Fields via Local Patch Meshing'
subtitle: ''
summary: ''
authors:
- baieri
- maggioli
- Zorah Laehner
- melzi
- rodola
tags: []
categories: []
date: '2024-05-21'
lastmod: 2023-10-02T:26:44
featured: false
draft: false
publication_short: "Preprint"

# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
caption: ''
focal_point: 'Center'
preview_only: false

# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects: []
publishDate: '2023-10-02T:26:44'
publication_types:
- '3'
abstract: "In this work, we present the local patch mesh representation for neural signed distance fields. This technique allows us to discretize local regions of the level sets of an input SDF by projecting and deforming flat patch meshes onto the level set surface, using exclusively the SDF information and its gradient. Our analysis reveals this method to be more accurate than the standard marching cubes algorithm for approximating the implicit surface. Then, we apply this representation in the setting of handle-guided deformation: we introduce two distinct pipelines, which make use of 3D neural fields to compute As-Rigid-As-Possible deformations of both high-resolution meshes and neural fields under a given set of constraints. We run a comprehensive evaluation of our method and various baselines for neural field and mesh deformation, which shows that both pipelines achieve impressive efficiency and notable improvements in terms of quality of results and robustness. With our novel pipeline, we introduce a scalable approach to solve a well-established geometry processing problem on high-resolution meshes, and pave the way for extending other geometric tasks to the domain of implicit surfaces via local patch meshing."
publication: '*arXiv preprint*'
links:
- name: arXiv
url : https://arxiv.org/abs/2405.12895
- name: PDF
url: https://arxiv.org/pdf/2405.12895
- icon: github
icon_pack: fab
name: 'GitHub'
url: https://github.com/daniele-baieri/implicit-arap
---
7 changes: 7 additions & 0 deletions content/publication/bonzi-2023-voice/cite.bib
@@ -0,0 +1,7 @@
@inproceedings{bonzi-2023-voice,
author={Bonzi, Francesco and Mancusi, Michele and Deo, Simone Del and Melucci, Pierfrancesco and Tavella, Maria Stella and Parisi, Loreto and Rodol\`a, Emanuele},
booktitle={2023 IEEE 33rd International Workshop on Machine Learning for Signal Processing (MLSP)},
title={Exploiting Music Source Separation For Singing Voice Detection},
year={2023},
doi={10.1109/MLSP55844.2023.10285863}
}
Binary file added content/publication/bonzi-2023-voice/featured.png
45 changes: 45 additions & 0 deletions content/publication/bonzi-2023-voice/index.md
@@ -0,0 +1,45 @@
---
# Documentation: https://wowchemy.com/docs/managing-content/

title: Exploiting Music Source Separation For Singing Voice Detection
subtitle: ''
summary: ''
authors:
- Francesco Bonzi
- mancusi
- Simone Del Deo
- Pierfrancesco Melucci
- Maria Stella Tavella
- Loreto Parisi
- rodola
tags:
- 'source separation'
- 'audio'
categories: []
date: '2023-09-01'
lastmod: 2023-12-16T10:57:52+01:00
featured: false
draft: false
publication_short: "MLSP"

# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
caption: ''
focal_point: ''
preview_only: false

# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects: []
publishDate: '2023-02-05T09:57:52.156096Z'
abstract: "Singing voice detection (SVD) is an essential task in many music information retrieval (MIR) applications. Deep learning methods have shown promising results for SVD, but further performance improvements are desirable since it underlies many other tasks. This work proposes a novel SVD system combining a state-of-the-art music source separator (Demucs) with two downstream models: a Long-term Recurrent Convolutional Network (LRCN) and a Transformer network. Our work highlights two main aspects: the impact of a music source separation model, such as Demucs, and its zero-shot capabilities for the SVD task; and the potential for deep learning to improve the system’s performance further. We evaluate our approach on three datasets (Jamendo Corpus, MedleyDB, and MIR-1K) and compare the performance of the two models to a baseline root mean square (RMS) algorithm and the current state-of-the-art for the Jamendo Corpus dataset."
publication: '*International Workshop on Machine Learning for Signal Processing 2023*'
links:
- name: URL
url: https://ieeexplore.ieee.org/document/10285863
---
6 changes: 6 additions & 0 deletions content/publication/camoscio-23/cite.bib
@@ -0,0 +1,6 @@
@inproceedings{camoscio-23,
title = {Camoscio: an Italian Instruction-tuned LLaMA},
author = {Andrea Santilli and Emanuele Rodol{\`a}},
booktitle = {Proc. CLiC-it},
year = {2023},
}
Binary file added content/publication/camoscio-23/featured.png
54 changes: 54 additions & 0 deletions content/publication/camoscio-23/index.md
@@ -0,0 +1,54 @@
---
# Documentation: https://wowchemy.com/docs/managing-content/

title: "Camoscio: an Italian Instruction-tuned LLaMA"
subtitle: ''
summary: ''
authors:
- santilli
- rodola

tags:
- LLM

categories: []
date: '2023-12-18'
lastmod: 2022-09-30T11:32:00+02:00
featured: false
draft: false
publication_short: "CLiC-it 2023"

# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
# image:
# caption: ''
# focal_point: 'Center'
# preview_only: false

links:
- name: PDF
url: https://ceur-ws.org/Vol-3596/paper44.pdf
- icon: github
icon_pack: fab
name: 'GitHub'
url: https://github.com/teelinsan/camoscio
- icon: award
icon_pack: fas
name: 'Best Student Paper Award'
url: https://clic2023.ilc.cnr.it/awards/

# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects: []
publishDate: '2022-04-28T10:30:59.888843Z'
publication_types:
- '1'

abstract: "In recent years Large Language Models (LLMs) have increased the state of the art on several natural language processing tasks. However, their accessibility is often limited to paid API services, posing challenges for researchers in conducting extensive investigations. On the other hand, while some open-source models have been proposed by the community, they are typically English-centric or multilingual without a specific adaptation for the Italian language. In an effort to democratize the available and open resources for the Italian language, in this paper we introduce Camoscio: a language model specifically tuned to follow users' prompts in Italian. Specifically, we finetuned the smallest variant of LLaMA (7b) with LoRA on a corpus of instruction prompts translated to Italian via ChatGPT. Results indicate that the model's zero-shot performance on various downstream tasks in Italian competes favorably with existing models specifically finetuned for those tasks. All the artifacts (code, dataset, model) are released to the community."

publication: '*Italian Conference on Computational Linguistics (CLiC-it 2023)*'
---
6 changes: 6 additions & 0 deletions content/publication/cannistraci-2023-charts/cite.bib
@@ -0,0 +1,6 @@
@inproceedings{cannistraci-2023-charts,
title={From Charts to Atlas: Merging Latent Spaces into One},
author={Donato Crisostomi and Irene Cannistraci and Luca Moschella and Pietro Barbiero and Marco Ciccone and Pietro Li\`o and Emanuele Rodol\`a},
year={2023},
booktitle={Proc. NeurReps},
}
46 changes: 46 additions & 0 deletions content/publication/cannistraci-2023-charts/index.md
@@ -0,0 +1,46 @@
---
# Documentation: https://wowchemy.com/docs/managing-content/

title: 'From Charts to Atlas: Merging Latent Spaces into One'
subtitle: ''
summary: ''
authors:
- crisostomi
- cannistraci
- moschella
- Pietro Barbiero
- Marco Ciccone
- Pietro Lio
- rodola
tags:
- 'Model merging'
categories: []
date: '2023-11-29'
lastmod: 2023-12-16T10:57:52+01:00
featured: false
draft: false
publication_short: "NeurReps 2023"

# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
caption: ''
focal_point: ''
preview_only: false

# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects: []
publishDate: '2023-02-05T09:57:52.156096Z'
abstract: "Models trained on semantically related datasets and tasks exhibit comparable inter-sample relations within their latent spaces. We investigate in this study the aggregation of such latent spaces to create a unified space encompassing the combined information. To this end, we introduce Relative Latent Space Aggregation (RLSA), a two-step approach that first renders the spaces comparable using relative representations, and then aggregates them via a simple mean. We carefully divide a classification problem into a series of learning tasks under three different settings: sharing samples, classes, or neither. We then train a model on each task and aggregate the resulting latent spaces. We compare the aggregated space with that derived from an end-to-end model trained over all tasks and show that the two spaces are similar. We then observe that the aggregated space is better suited for classification, and empirically demonstrate that it is due to the unique imprints left by task-specific embedders within the representations. We finally test our framework in scenarios where no shared region exists and show that it can still be used to merge the spaces, albeit with diminished benefits over naive merging."
publication: '*NeurReps Workshop 2023*'
links:
- name: URL
url: https://openreview.net/forum?id=ZFu7CPtznY
- name: PDF
url: https://openreview.net/pdf?id=ZFu7CPtznY
---
16 changes: 11 additions & 5 deletions content/publication/cannistraci-2023-infusing/index.md
@@ -12,11 +12,11 @@ authors:
- rodola
tags: []
categories: []
date: '2023-10-02'
date: '2024-06-02'
lastmod: 2023-10-02T:26:44
featured: false
draft: false
publication_short: ""
publication_short: "ICLR 2024"

# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
@@ -36,8 +36,14 @@ publishDate: '2023-10-02T:26:44'
publication_types:
- '3'
abstract: 'It has been observed that representations learned by distinct neural networks conceal structural similarities when the models are trained under similar inductive biases. From a geometric perspective, identifying the classes of transformations and the related invariances that connect these representations is fundamental to unlocking applications, such as merging, stitching, and reusing different neural modules. However, estimating task-specific transformations a priori can be challenging and expensive due to several factors (e.g., weights initialization, training hyperparameters, or data modality). To this end, we introduce a versatile method to directly incorporate a set of invariances into the representations, constructing a product space of invariant components on top of the latent representations without requiring prior knowledge about the optimal invariance to infuse. We validate our solution on classification and reconstruction tasks, observing consistent latent similarity and downstream performance improvements in a zero-shot stitching setting. The experimental analysis comprises three modalities (vision, text, and graphs), twelve pretrained foundational models, eight benchmarks, and several architectures trained from scratch.'
publication: '*arXiv preprint arXiv:2206.06182*'
publication: '*International Conference on Learning Representations (ICLR 2024)*'
links:
- name: URL
url : https://arxiv.org/abs/2310.01211
---
url : https://openreview.net/forum?id=vngVydDWft
- name: PDF
url: https://openreview.net/pdf?id=vngVydDWft
- icon: award
icon_pack: fas
name: 'ICLR 2024 spotlight'
url: https://iclr.cc/virtual/2024/poster/17521
---
9 changes: 9 additions & 0 deletions content/publication/ciranni-2024-cocola/cite.bib
@@ -0,0 +1,9 @@
@misc{ciranni-2024-cocola,
title={COCOLA: Coherence-Oriented Contrastive Learning of Musical Audio Representations},
author={Ruben Ciranni and Emilian Postolache and Giorgio Mariani and Michele Mancusi and Luca Cosmo and Emanuele Rodol\`a},
year={2024},
eprint={2404.16969},
archivePrefix={arXiv},
primaryClass={cs.SD},
url={https://arxiv.org/abs/2404.16969},
}
50 changes: 50 additions & 0 deletions content/publication/ciranni-2024-cocola/index.md
@@ -0,0 +1,50 @@
---
# Documentation: https://wowchemy.com/docs/managing-content/

title: 'COCOLA: Coherence-Oriented Contrastive Learning of Musical Audio Representations'
subtitle: ''
summary: ''
authors:
- Ruben Ciranni
- postolache
- mariani
- mancusi
- cosmo
- rodola
tags: []
categories: []
date: '2024-04-29'
lastmod: 2023-10-02T:26:44
featured: false
draft: false
publication_short: "Preprint"

# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
caption: ''
focal_point: 'Center'
preview_only: false

# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects: []
publishDate: '2023-10-02T:26:44'
publication_types:
- '3'
abstract: "We present COCOLA (Coherence-Oriented Contrastive Learning for Audio), a contrastive learning method for musical audio representations that captures the harmonic and rhythmic coherence between samples. Our method operates at the level of stems (or their combinations) composing music tracks and allows the objective evaluation of compositional models for music in the task of accompaniment generation. We also introduce a new baseline for compositional music generation called CompoNet, based on ControlNet, generalizing the tasks of MSDM, and quantify it against the latter using COCOLA. We release all models trained on public datasets containing separate stems (MUSDB18-HQ, MoisesDB, Slakh2100, and CocoChorales)."
publication: '*arXiv preprint*'
links:
- name: arXiv
url : https://arxiv.org/abs/2404.16969
- name: PDF
url: https://arxiv.org/pdf/2404.16969
- icon: github
icon_pack: fab
name: 'GitHub'
url: https://github.com/gladia-research-group/cocola
---
8 changes: 8 additions & 0 deletions content/publication/cosmo-2024-kernel/cite.bib
@@ -0,0 +1,8 @@
@article{cosmo-2024-kernel,
author = {Cosmo, Luca and Minello, Giorgia and Bicciato, Alessandro and Bronstein, Michael and Rodolà, Emanuele and Rossi, Luca and Torsello, Andrea},
title = {Graph Kernel Neural Networks},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
year = {2024},
  abstract = {The convolution operator at the core of many modern neural architectures can effectively be seen as performing a dot product between an input matrix and a filter. While this is readily applicable to data such as images, which can be represented as regular grids in the Euclidean space, extending the convolution operator to work on graphs proves more challenging, due to their irregular structure. In this article, we propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain. This allows us to define an entirely structural model that does not require computing the embedding of the input graph. Our architecture allows us to plug in any type of graph kernel and has the added benefit of providing some interpretability in terms of the structural masks that are learned during the training process, similar to what happens for convolutional masks in traditional convolutional neural networks (CNNs). We perform an extensive ablation study to investigate the model hyperparameters’ impact and show that our model achieves competitive performance on standard graph classification and regression datasets.}
}

48 changes: 48 additions & 0 deletions content/publication/cosmo-2024-kernel/index.md
@@ -0,0 +1,48 @@
---
# Documentation: https://wowchemy.com/docs/managing-content/

title: 'Graph Kernel Neural Networks'
subtitle: ''
summary: ''
authors:
- cosmo
- Giorgia Minello
- Alessandro Bicciato
- Michael Bronstein
- rodola
- Luca Rossi
- Andrea Torsello
tags:
- 'Graph learning'
categories: []
date: '2024-05-01'
lastmod: 2023-02-05T10:57:53+01:00
featured: false
draft: false
publication_short: "TNNLS"

# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
caption: ''
focal_point: ''
preview_only: false

# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects: []
publishDate: '2023-02-05T09:57:52.951304Z'
publication_types:
- '2'
abstract: "The convolution operator at the core of many modern neural architectures can effectively be seen as performing a dot product between an input matrix and a filter. While this is readily applicable to data such as images, which can be represented as regular grids in the Euclidean space, extending the convolution operator to work on graphs proves more challenging, due to their irregular structure. In this article, we propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain. This allows us to define an entirely structural model that does not require computing the embedding of the input graph. Our architecture allows us to plug in any type of graph kernel and has the added benefit of providing some interpretability in terms of the structural masks that are learned during the training process, similar to what happens for convolutional masks in traditional convolutional neural networks (CNNs). We perform an extensive ablation study to investigate the model hyperparameters’ impact and show that our model achieves competitive performance on standard graph classification and regression datasets."
publication: '*IEEE Transactions on Neural Networks and Learning Systems*'
links:
- name: URL
url: https://ieeexplore.ieee.org/document/10542111
- name: arXiv
url: https://arxiv.org/abs/2112.07436
---