Replies: 31 comments 34 replies
-
Howzit, my name is Shaun. I'm currently an ML developer working mainly in computer vision, but I'm keen to start working and learning in some other areas, and this seems like an exciting and challenging opportunity to get stuck in.
-
Hi all, my name's Satish Saini. I'm currently pursuing a B.Tech at the Moradabad Institute of Technology, and I want to be an ML developer.
-
Hi! My name is Jose Montoro. I'm an old friend of @bs from our days of going to yoga at the same place in SF! Hi Britt, it's been a while! 👋 It was through Britt that I first found out about ESP, and I found it fascinating. I love the mission and the approach. I'm currently a Language Engineer at Apple with a background in Linguistics. I'm very interested in all the new approaches to understanding language that technology provides. What ESP is trying to accomplish is both an incredibly stimulating intellectual exercise and a very powerful take on probably the most pressing issue of our time. That is to say, I'd love to contribute! I read in the comments above that the ESP library might be a good place to start. Is that still the case? Is there anything in particular that would help the project right now? Maybe a general exploration/visualization/summarization of part of the ESP library data would still be useful? Nice to meet you all!
-
Hello! My name is Celena and I discovered this project from the "Two Heartbeats A Minute" episode on Invisibilia. I'm a newb to comp sci but sociolinguistics has been a hobby of mine. This project is fascinating and makes me think of Dr. Dolittle (if anyone remembers that movie)! I'm a little overwhelmed and intimidated by the knowledge and skills behind a project like this, but I hope there is a way I can contribute. If anything, I can at least learn from you all!
-
Hi all!
-
Hello Friends. I am a biological oceanographer studying marine mammal communication and human noise impacts in the ocean. Glad and grateful to connect.
-
Hi and, more important than my introduction, thank you for what you are doing! I sat through the recent webinar and was humbled by the expertise, but excited to see that the application of your research is approachable to citizen scientists like myself.

My interest in this effort comes from three aspects of my background. First, a PhD in systemic functional linguistics (a very non-Chomskyan approach to language modeling) in the '90s, prior to ML/AI being used in that field. Second, a 25-year career in enterprise software tech and, more lately, with hardware suppliers of various sensors/MCUs used in the IoT market. Third, 50 years of living in the Greater Yellowstone ecosystem, and within the past 10 years studying animal behavior as a citizen scientist.

I am not a computer scientist with the fabulous skills of Aza and others, but I understand how to bring scalable products to market within the tech world. Several of my counterparts and I have used those relationships to launch AI/ML hardware projects that bring together researchers and computer scientists building models that can be used to study animal communication. An example of this is https://www.hackster.io/contests/ElephantEdge -- pairing cutting-edge smart collars (with audio input) with AI models across various species. My company is also working with Microsoft as they launch their new Project Santa Cruz hardware for vision/audio AI-on-the-edge use cases. Currently, we are using this research in the Greater Yellowstone to create models of cattle behavior that can predict grizzly and wolf predation events, so as to help deter events that would otherwise result in euthanization. We are also deploying AI vision edge "trail cameras" with long battery life to record and signal unique animal behavior and reduce false positives for researchers.

I really look forward to taking whatever I learn from the Earth Species science and helping bridge the gap to research being done around Yellowstone National Park, to bring further awareness of the multi-modal communication used by species throughout this unique ecosystem. In particular, there is a veritable army of "citizens" who go through Yellowstone Park with recording devices, from their cell phones to commercial video platforms, who could be lobbied to contribute vocalizations they capture while vacationing in the area. Even better, there is a more focused army of locals (many with Masters and PhDs in relevant scientific fields) who could be enlisted to get more targeted vocalizations (often with video) via a coordinated effort...after all, they live here to see this native ecosystem at work and are out gathering data or helping formal researchers daily. I've already shared your work with several of them in Gardiner MT and we're discussing which species we might focus on first.
-
Hi,
-
Hi Everyone, I am Shivam Gupta, a junior undergrad at IIT Kanpur. I am exploring research in deep learning and am currently working on some brain-inspired algorithms. I have also very recently started volunteering for several data-science-for-social-good nonprofits. At my university I lead a club which provides machine learning solutions to nonprofits, all pro bono. I have a little background in NLP, but I am always keen to learn and explore new research areas. I would like to contribute to this project.
-
Hi, my name is Sam Joy. I am a data scientist working in the field of NLP and deep learning. I have just started my AI journey professionally, after completing my Master's in Computer Science. I got to know of this project after listening to the podcast episode "Two Heartbeats A Minute". I must say this project is truly inspiring and impactful. I hope to provide any contributions that I can to this wonderful project.
-
Hello! Thanks in advance and best of luck!
-
Hey! My name is [Andrew](https://asross.github.io/), and I'm a nearly-graduated PhD student in machine learning at Harvard. Although this isn't really my area of expertise, I wanted to share a slightly crazy [idea](https://docs.google.com/document/d/18Wpv7idS1khLDRH3CDSE2aZNy5PziGvz6Oyqtc2CZrQ/edit?usp=sharing) I have for collecting a small amount (or potentially even a large amount) of supervised animal vocalization data, i.e. vocalization data that we know pertains to specific, well-defined concepts. My sense is that having that kind of data could really help even if the bulk of the work is done in an unsupervised setting. For example, if you had a set of vocalizations for the concept of "large" and a set of vocalizations for the concept of "small," you might be able to also label a direction in embedding space, which you could then re-use to help analyze (differences between) embeddings of unknown meaning. Would be curious to hear what you think!
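To make the direction-labeling step concrete, here is a minimal sketch (an illustration only, not ESP code or the linked document's method), assuming each vocalization has already been mapped to a fixed-length embedding vector; the label sets, array shapes, and file names are hypothetical:

```python
import numpy as np

def concept_direction(embeddings_a: np.ndarray, embeddings_b: np.ndarray) -> np.ndarray:
    """Unit vector in embedding space pointing from concept B toward concept A,
    e.g. from "small"-labeled vocalizations toward "large"-labeled ones."""
    direction = embeddings_a.mean(axis=0) - embeddings_b.mean(axis=0)
    return direction / np.linalg.norm(direction)

def concept_scores(embeddings: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Project unlabeled embeddings onto the concept axis: higher scores lie
    nearer the "large" end, lower scores nearer the "small" end."""
    return embeddings @ direction

# Hypothetical usage with pre-computed embeddings of shape (n_clips, dim):
# large_embs = np.load("large_vocalizations.npy")
# small_embs = np.load("small_vocalizations.npy")
# unknown = np.load("unlabeled_vocalizations.npy")
# axis = concept_direction(large_embs, small_embs)
# scores = concept_scores(unknown, axis)
```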
-
One of the big remaining questions in animal linguistics/communication theory is “do they have enough semantic domains to actually create directions in an embedding space that can be mapped to human languages?” In my opinion, it’s the 60T question. If we just start with birds, where the genetics of the FOXP2 gene seems to suggest a common evolution of basic vocalizations, and we just start with what we know about three common functions of bird calls: alarm calls, attraction calls (songs), and contact calls…AND WE DECIDE BETWEEN A WORD AND A SYLLABLE, then can you brilliant ML folks create an embedding space that maps to a known human language and create a translation tool? Researchers know enough about a black-capped chickadee alarm call to tease out which parts of the “sentence” are “words” that indicate the type of species that is threatening them (e.g. pygmy owl vs. great horned owl), but we think this is due to a repetition of syllables rather than a different spectrogram. In other words, the number of calls indicates the meaning in the case of a species threat, not the sound of the call. That said, the “dee” portion of a chick-a-dee three- or four-note/word alarm “sentence” definitely seems to be associated with the type of threat. And perhaps with your embedding space we can figure out what the rest of the words in that same sentence mean (e.g. location of threat?).
Or, perhaps better than starting with birds, start with Con Slobodchikoff’s research on prairie dogs (which has even identified dialects between species that could be used to ground-truth your embedding spaces). That work is sophisticated enough to build an embedding space that might help those same researchers figure out some of the other vocalizations a prairie dog makes that we have no clue about.
Anyway, great to have you aboard. I still believe this project is the best hope we have of creating a grammar of Chickadees…or Prairie Dogs…or…
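As a toy illustration of the "number of notes carries the meaning" point above, one could count note onsets in a recorded chick-a-dee call with an off-the-shelf audio library. This is only a sketch under the assumption that each detected onset roughly corresponds to one note; the file name is hypothetical:

```python
import librosa

def count_notes(path: str) -> int:
    """Rough note count for a single recorded call: load the clip and count
    detected onset events (assumes one onset per note, which is approximate)."""
    y, sr = librosa.load(path, sr=None)
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="frames")
    return len(onsets)

# Hypothetical usage: a higher count of notes in the "dee" section would be the
# crude feature standing in for "repetition encodes the threat", alongside any
# spectral features an embedding model learns.
# n_notes = count_notes("chickadee_alarm_call.wav")
```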
-
Hello, I am Rahul. Nice to be here, and I'm very excited about the mission. I am an engineer with more than 12 years of experience, moving from IT to machine learning.
-
Hi, I'm a computational social scientist, and I stumbled on this project while listening to a podcast on a road trip during an annual rafting trip with some old science-y friends. Out of curiosity, are you all thinking about playing various sounds back to their species to try to better understand what they mean through observed behavior? The umwelt issues in this inspiring initiative will be quite tricky to tackle! Also, by the way, the link to the cocktail party problem in the project README is broken :/ Cheers, PA
-
Hi! My name is Colin. I'm a machine learning engineer working with audio models that run on tiny battery-powered devices (microcontrollers) for a startup here in Sweden. I recently participated in two birdcall recognition competitions on Kaggle, which opened my eyes to bioacoustics. I found out about ESP from @radekosmulski on Twitter. I find the project absolutely fascinating and can't wait to dive deeper when I get the time!
-
Hey everybody! I'm a data scientist in nonprofit healthcare and communication, and a buddy of animals. I've just discovered your magnificent project. I'm stoked to dig in and learn and hope to contribute! PS: I don't know enough about the ESP methodology & toolbox yet to know if this is relevant, so I'm sorry if it proves off topic. But I wanted to share a study that made a splash in the medical literature today: Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria (study, and a nice lay summary in STAT News). They used DL models, a language model, and a Viterbi decoder to detect words and then predict sentences in real time from the brain signals of a man who'd lost the physical ability to speak. Only N=1 and not bridging species, but caught my eye while thinking about ESP. (Aside from the prospect of relieving a lot of human suffering, if this line of research pans out. Which stirs my heart, man.)
-
Hello. My name is Melody and I am currently doing an art-practice PhD in virtual reality and zoosemiotics. It is wonderful to discover your project. I am specifically interested in the umwelt and worlding of animals, and am trying to creatively visualize the voices of animals in social virtual worlds and also using VR tools to make sculptures inspired by different animal languages. Mine is really a creative approach - more to open up thinking about meaning in non-human voices. But I am writing about current research in this field - like yours. Thanks for your work. So exciting. This is my site: https://melodyowen.net
-
Hi all, I am a random hacker with 20+ years of work experience in IT who became fascinated by the diversity and complexity of beluga whale vocalizations. I stumbled upon the Watkins Marine Mammal Sound Database and have been working to make that dataset more accessible for researchers. (Just a personal open source side project; I am not associated with any institution.) The project's goals are outlined on the website: https://marine-mammal.soundwave.cl/about.html The Watkins database has some good potential to be used in the research of non-human languages. If you guys have any ideas, applications, or tasks, I would be happy to help with the technical side of things.
-
Hi,
-
Hi, I am Aditya and I currently work as an ML engineer at a startup in Finland. The goal of the Earth Species Project truly resonates with me, and I would like to contribute to any open source projects you are undertaking. What projects are you folks working on currently?
-
Hello! My name is Anna and I have a background in animal cognition and have consulted for AI modeling research projects. I also have a Ph.D. in neuroscience and decision-making. I would love to join your company and have just submitted an application. Thank you for your time and attention to my application and for doing such great work in this field and for this world!
-
Hi all, I'm Kathleen, a computational biologist by training with expertise in comparative and functional genomics, currently working to translate evolutionary innovations into human therapeutics. I'm very intrigued and inspired by the mission of ESP and keen to get involved. I'm currently planning to explore the GitHub to see which projects are most active and how I might contribute. Cheers!
-
Hello everyone, I'm Julien, a researcher and data scientist with a PhD in neuroscience. I love animal cognition, have substantial experience in the field, and have written scientific papers in this area. ESP's mission to decode the language of animals captivates me, and I'm eager to contribute my expertise and channel my energy into leading this incredible journey to resounding success.
-
Hi everyone, I am a field guide in South Africa, and I interpret animal behavior for international guests on safari in Africa. I am passionate about nature and find the communication systems between plants, animals, and humans fascinating. I'm looking forward to learning more from you all in this thread.
-
Hi everyone! Great to be part of the Earth Species Project Discord chat. I have been researching and experimenting with multispecies education for over two decades now. It is so exciting to see developments in technology that hold the power to deepen our understanding of non-human species - and enable us to access their perspectives. These multispecies perspectives will be critical in guiding our efforts to solve global sustainability challenges. It's a shared planet. Looking forward to exchanging ideas with you all and learning together as part of this amazing project!
-
Hello everyone! I'm Emmanuel Fernandez, currently collaborating with the LISSE lab in Quebec to study beluga whale vocalizations using deep learning. My research focuses on detecting and classifying these calls with CNNs and a fine-tuned AVES model. As a Data Science and AI graduate with a focus on NLP, I find your research into understanding the meaning behind these vocalizations absolutely fascinating. If there's any way I can contribute to your projects, whether big or small, I'd love to help out. Just let me know!
-
Hi folks! I may have met you in the Discord or community calls. I'm interested in ESP because I believe intraspecies understanding can help humans navigate our insufficient commitment to planetary coexistence. As for me: I connect people, ideas, and technologies to make the world better, most recently working as the Director of Communications & Community at Creative Commons and now independently at [Nudgital](https://nudgital.com/). I've worked across a wide variety of public and private institutions, focusing on community development, digital communications, meaningful education, open technologies, and sustainable growth. I live in Portland, Oregon, USA, among a lot of crows and with some other cats and humans. Learn more about me on [my blog](https://xolotl.org).
-
Hi Nate!
Welcome and nice to meet you.
Kindly,
Muria Roberts
Founder/Director, Multispecies Education International (MEI)
-
Hi all!
-
Hello everyone, and welcome to the ESP Discussions intro thread. Please introduce yourself 🤖