diff --git a/docs/about.md b/docs/about.md
index 4069bd80e1..d9cb584b35 100644
--- a/docs/about.md
+++ b/docs/about.md
@@ -2,15 +2,10 @@
-FINN is an
-experimental framework from Xilinx Research Labs to explore deep neural network
-inference on FPGAs.
-It specifically targets quantized neural
-networks, with emphasis on
-generating dataflow-style architectures customized for each network.
-It is not
-intended to be a generic DNN accelerator like xDNN, but rather a tool for
-exploring the design space of DNN inference accelerators on FPGAs.
+FINN is a machine learning framework by the Integrated Communications and AI Lab of AMD Research & Advanced Development.
+It provides an end-to-end flow for the exploration and implementation of quantized neural network inference solutions on FPGAs.
+FINN generates dataflow architectures as a physical representation of the implemented custom network in space.
+It is not a generic DNN acceleration solution; instead, it relies on co-design and design space exploration to tune quantization and parallelization, optimizing solutions with respect to resource and performance requirements.
## Features
@@ -31,16 +26,16 @@ design space.
## Who are we?
-The FINN team consists of members of AMD Research under Ivo Bolsens (CTO) and members of CommsDC Solutions Engineering under Allen Chen (AECG-CommsDCSolnEng), working very closely with the Pynq team and Kristof Denolf and Jack Lo for integration with video processing.
+The FINN team consists of members of AMD Research under Ralph Wittig (AMD Research & Advanced Development) and members of Custom & Strategic Engineering under Allen Chen, working very closely with the Pynq team.
-
+
From top left to bottom right: Yaman Umuroglu, Michaela Blott, Alessandro Pappalardo, Lucian Petrica, Nicholas Fraser,
Thomas Preusser, Jakoba Petri-Koenig, Ken O’Brien
-
+
-From top left to bottom right: Eamonn Dunbar, Kasper Feurer, Aziz Bahri, Fionn O'Donohoe, Mirza Mrahorovic
+From top left to bottom right: Eamonn Dunbar, Kasper Feurer, Aziz Bahri, John Monks, Mirza Mrahorovic
diff --git a/docs/img/finn-stack.PNG b/docs/img/finn-stack.PNG
new file mode 100755
index 0000000000..961232c4bf
Binary files /dev/null and b/docs/img/finn-stack.PNG differ
diff --git a/docs/img/finn-stack.png b/docs/img/finn-stack.png
deleted file mode 100644
index e34b1ecb45..0000000000
Binary files a/docs/img/finn-stack.png and /dev/null differ
diff --git a/docs/img/finn-team1.png b/docs/img/finn-team1.png
index 774d8e8ee9..c311720ba8 100755
Binary files a/docs/img/finn-team1.png and b/docs/img/finn-team1.png differ
diff --git a/docs/index.md b/docs/index.md
index 35b5f68742..5066c5cf9e 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,18 +1,13 @@
# FINN
-
+
-FINN is an
-experimental framework from Xilinx Research Labs to explore deep neural network
-inference on FPGAs.
-It specifically targets quantized neural
-networks, with emphasis on
-generating dataflow-style architectures customized for each network.
-It is not
-intended to be a generic DNN accelerator offering like [Vitis AI](https://www.xilinx.com/products/design-tools/vitis/vitis-ai.html), but rather a tool for
-exploring the design space of DNN inference accelerators on FPGAs.
-
-A new, more modular version of the FINN compiler is currently under development on GitHub, and we welcome contributions from the community!
+FINN is a machine learning framework by the Integrated Communications and AI Lab of AMD Research & Advanced Development.
+It provides an end-to-end flow for the exploration and implementation of quantized neural network inference solutions on FPGAs.
+FINN generates dataflow architectures as a physical representation of the implemented custom network in space.
+It is not a generic DNN acceleration solution; instead, it relies on co-design and design space exploration to tune quantization and parallelization, optimizing solutions with respect to resource and performance requirements.
+
+The FINN compiler is under active development on GitHub, and we welcome contributions from the community!
## Quickstart