diff --git a/index.html b/index.html
index dc7d1f6..0852a28 100755
--- a/index.html
+++ b/index.html
@@ -54,6 +54,64 @@
+  • Demo Video
+  • Installation
+  • Usage Instructions
+  • Bias Detection Tutorial
+  • How to Use Fair-Sense-AI
+  • Troubleshooting
+  • Prerequisites
+  • Contact
+  • License

@@ -86,192 +144,147 @@

    Fair-Sense-AI

    Fair-Sense-AI is a cutting-edge, AI-driven platform designed to promote transparency, fairness, and equity by analyzing bias in textual and visual content. Whether you're addressing societal biases, identifying disinformation, or fostering responsible AI practices, Fair-Sense-AI equips you with the tools to make informed decisions.

    +

    📦 Fair-Sense-AI on PyPI


    Key Features

    📄 Text Analysis

    🖼️ Image Analysis

    📂 Batch Processing

    📜 AI Governance Insights


    -

    Demo Video

    -

    Watch the demonstration of the FairSense platform below:

    +

    Demo Video

    +

    Watch the demonstration of Fair-Sense-AI below:

    + width="500" height="400" allow="autoplay">
    -

    Installing Fair-Sense-AI

    -

    Install the Fair-Sense-AI package using pip:

    -
    pip install Fair-Sense-AI
    +

    Installation

    +

    Install Fair-Sense-AI using pip:

    +
    pip install fair-sense-ai
     
    +

    Dependencies

    +

    Ensure the following prerequisites are met (see the quick check below):
    1. Python 3.7+
    2. Tesseract OCR for image analysis (installation instructions below).
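
    Before installing, you can confirm both prerequisites from a Python prompt. This is a minimal sketch using only the standard library; the tesseract binary name matches the OCR dependency listed above:

    import shutil
    import sys

    # Fair-Sense-AI requires Python 3.7 or newer.
    assert sys.version_info >= (3, 7), f"Python 3.7+ required, found {sys.version.split()[0]}"

    # Tesseract OCR must be discoverable on PATH for image analysis.
    tesseract = shutil.which("tesseract")
    print("Tesseract found at:", tesseract if tesseract else "NOT FOUND (see installation instructions below)")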

    +

    Usage Instructions

    -

    Launching the Application

    +

    Launch the Application

    Run the following command to start Fair-Sense-AI:

    -
    Fair-Sense-AI
    +
    fair-sense-ai
     
    -

    This will launch the Gradio-powered interface in your default web browser.

    +

    This will launch a Gradio-powered interface in your default web browser.

    +

    Bias Detection Tutorial

    Setup

    1. Download the Data:
-      Download the data from this Google Drive link.
-      Upload the downloaded files to your environment (e.g., Jupyter Notebook, Google Colab, etc.).
+      Our dataset: Newsmediabias-plus
+      Download the example datasets from this Google Drive link and upload the files to your environment (e.g., Jupyter Notebook, Google Colab, etc.).
+      Example Google Colab notebook: Run the Tutorial.
    -

    Install Required Packages

    -
    !pip install --quiet fair-sense-ai
    -!pip uninstall sympy -y
    -!pip install sympy --upgrade
    -!apt update
    -!apt install -y tesseract-ocr
    +
    pip install fair-sense-ai
    +pip uninstall sympy -y
    +pip install sympy --upgrade
    +apt update
    +apt install -y tesseract-ocr
     
    -

    Restart your system if you are using Google Colab.
    -Example Colab Notebook: Run the Tutorial


    Code Examples

    -

    1. Text Bias Analysis

    -
    # Import Required Libraries
    -from fairsenseai import analyze_text_for_bias
    +

    Text Bias Analysis

    +
    from fairsenseai import analyze_text_for_bias
     
    -# Example input text to analyze for bias
     text_input = "Women are better at multitasking than men."
     
    -# Analyze the text for bias using FairSense AI
     highlighted_text, detailed_analysis = analyze_text_for_bias(text_input)
     
    -# Print the analysis results
     print("Highlighted Text:", highlighted_text)
     print("Detailed Analysis:", detailed_analysis)
     
    -

    2. Image Bias Analysis

    -
    # Import Required Libraries
    -import requests
    +

    Image Bias Analysis

    +
    from fairsenseai import analyze_image_for_bias
     from PIL import Image
    -from io import BytesIO
    -from fairsenseai import analyze_image_for_bias
    -from IPython.display import display, HTML
    -
    -# URL of the image to analyze
    -image_url = "https://cdn.i-scmp.com/sites/default/files/styles/1200x800/public/images/methode/2018/05/31/20b096c2-64b4-11e8-82ea-2acc56ad2bf7_1280x720_173440.jpg?itok=2I32exTB"
     
    -# Fetch and load the image
    -response = requests.get(image_url)
    -if response.status_code == 200:
    -    # Load the image
    -    image = Image.open(BytesIO(response.content))
    +image = Image.open("example_image.jpg")
     
    -    # Resize the image for smaller display
    -    small_image = image.copy()
    -    small_image.thumbnail((200, 200))  # Maintain aspect ratio while resizing
    +highlighted_caption, image_analysis = analyze_image_for_bias(image)
     
    -    # Display the resized image
    -    print("Original Image (Resized):")
    -    display(small_image)
    -
    -    # Analyze the image for bias
    -    highlighted_caption, image_analysis = analyze_image_for_bias(image)
    -
    -    # Print the analysis results
    -    print("Highlighted Caption:", highlighted_caption)
    -    print("Image Analysis:", image_analysis)
    -
    -    # Display highlighted captions (if available)
    -    if highlighted_caption:
    -        display(HTML(highlighted_caption))
    -else:
    -    print(f"Failed to fetch the image. Status code: {response.status_code}")
    +print("Highlighted Caption:", highlighted_caption)
    +print("Image Analysis:", image_analysis)
     
    -

    3. Launch the Interactive Application

    +

    Launch the Interactive Application

    from fairsenseai import main
     
    -# Launch the Gradio application (will open in the browser)
    -main()
    +main()  # Launches the Gradio interface in a browser
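
    If you prefer to start the interface from your own script rather than the fair-sense-ai console command, a minimal launcher looks like this, guarding main() so the app only starts when the file is run directly:

    from fairsenseai import main

    if __name__ == "__main__":
        # Starts the Gradio interface in the default browser.
        main()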
     

    How to Use Fair-Sense-AI

    1. Text Analysis

-   • Navigate to the Text Analysis tab in the Gradio interface.
-   • Input or paste the text you want to analyze.
+   • Input text into the Text Analysis tab of the Gradio interface.
    • Click Analyze to detect and highlight biases.

    2. Image Analysis

-   • Navigate to the Image Analysis tab.
-   • Upload an image to analyze for biases in embedded text or captions.
-   • Click Analyze to view detailed results.
+   • Upload an image in the Image Analysis tab.
+   • Click Analyze to evaluate biases in captions or embedded text.

    3. Batch Text CSV Analysis

-   • Navigate to the Batch Text CSV Analysis tab.
-   • Upload a CSV file with a column named text.
-   • Click Analyze CSV to process and analyze all entries.
+   • Upload a CSV file with a text column (see the sketch below).
+   • Click Analyze CSV to process and flag all entries.
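
    To prepare or sanity-check a compliant file before uploading, here is a minimal sketch assuming pandas is installed; the file name and rows are illustrative:

    import pandas as pd

    # Build an illustrative CSV with the required "text" column.
    df = pd.DataFrame({"text": [
        "Women are better at multitasking than men.",
        "Young people are naturally better with technology.",
    ]})
    df.to_csv("batch_input.csv", index=False)

    # Confirm the column the batch analyzer expects is present.
    assert "text" in pd.read_csv("batch_input.csv").columns
    print("batch_input.csv is ready for the Batch Text CSV Analysis tab.")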

    4. Batch Image Analysis

-   • Navigate to the Batch Image Analysis tab.
-   • Upload multiple images to analyze biases in captions or embedded text.
+   • Upload multiple images in the Batch Image Analysis tab (see the sketch below).
    • Click Analyze Images to view results.
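
    The same check can be scripted over a folder of images by looping the analyze_image_for_bias function from the earlier example; the images/*.jpg pattern below is illustrative:

    from glob import glob

    from PIL import Image
    from fairsenseai import analyze_image_for_bias

    # Analyze every JPEG in an illustrative local folder.
    for path in sorted(glob("images/*.jpg")):
        image = Image.open(path)
        caption, analysis = analyze_image_for_bias(image)
        print(path)
        print("  Caption:", caption)
        print("  Analysis:", analysis)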

    5. AI Governance Insights

    • Navigate to the AI Governance and Safety tab.
-   • Choose a predefined topic or input your own.
-   • Click Get Insights for actionable recommendations.
+   • Select a predefined topic or input your own.
+   • Click Get Insights for detailed recommendations (see the sketch below for a programmatic equivalent).
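
    The API reference later in these docs lists an ai_governance_response(prompt) helper; assuming it is exported by the fairsenseai package in your installed version (an assumption worth verifying), the tab can also be driven from code:

    from fairsenseai import ai_governance_response  # assumed export -- see the API reference

    # Request governance guidance on a custom topic.
    insights = ai_governance_response("How should organizations audit AI systems for bias?")
    print(insights)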

    Troubleshooting

    Common Issues

-   • Models Download Slowly:
-     On first use, models are downloaded automatically. Ensure you have a stable internet connection.
+   • Slow Model Downloads:
+     First-time users may experience slow downloads. Ensure a stable internet connection.

-   • Tesseract Not Found:
-     Verify Tesseract is installed and accessible in your system's PATH.
+   • Tesseract Missing:
+     Verify Tesseract is installed and accessible in your system's PATH.

-   • GPU Support:
-     Install PyTorch with CUDA support if you want GPU acceleration.
-     bash
-     pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
+   • GPU Acceleration:
+     Install PyTorch with CUDA support for faster processing.
+     pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
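
    After installing the CUDA build, you can confirm that PyTorch actually sees the GPU with the standard torch.cuda check:

    import torch

    # True only when a CUDA-enabled build of PyTorch detects a compatible GPU.
    if torch.cuda.is_available():
        print("GPU acceleration enabled:", torch.cuda.get_device_name(0))
    else:
        print("Running on CPU -- reinstall PyTorch with CUDA support for faster processing.")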

-    Further instructions
-
-    sample data
-
-    • A sample CSV file with a text column.
-    • Sample images for analysis.

-    Prerequisites
+    Prerequisites

-    1. Python 3.7+
-       Ensure you have Python installed. Download it here.
-    2. Tesseract OCR
-       Required for extracting text from images.
+    1. Python 3.7+: Download here.
+    2. Tesseract OCR: Required for image text extraction.

    #### Installation Instructions:
    - Ubuntu:

@@ -282,7 +295,7 @@

    Further instructions

    bash brew install tesseract
    - Windows:
-     - Download and install Tesseract OCR from this link.
+     Download Tesseract OCR from this link.


    Contact

    For inquiries or support, contact:
    @@ -341,5 +354,5 @@

    License

    diff --git a/search/search_index.json b/search/search_index.json index 2ff3f12..e94b10b 100755 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Fair-Sense-AI Fair-Sense-AI is a cutting-edge, AI-driven platform designed to promote transparency, fairness, and equity by analyzing bias in textual and visual content. Whether you're addressing societal biases, identifying disinformation, or fostering responsible AI practices, Fair-Sense-AI equips you with the tools to make informed decisions. Key Features \ud83d\udcc4 Text Analysis Detect and highlight biases within text, such as targeted language or phrases. Provide actionable feedback on the tone and fairness of the content. \ud83d\uddbc\ufe0f Image Analysis Extract embedded text from images and analyze it for potential biases. Generate captions for images and evaluate their fairness and inclusivity. \ud83d\udcc2 Batch Processing Analyze large datasets of text or images efficiently. Automatically highlight problematic patterns across entire datasets. \ud83d\udcdc AI Governance Insights Gain detailed insights into ethical AI practices, fairness guidelines, and bias mitigation strategies. Explore topics like data privacy, transparency, and responsible AI deployment. Demo Video Watch the demonstration of the FairSense platform below: Installing Fair-Sense-AI Install the Fair-Sense-AI package using pip: pip install Fair-Sense-AI Usage Instructions Launching the Application Run the following command to start Fair-Sense-AI: Fair-Sense-AI This will launch the Gradio-powered interface in your default web browser. Bias Detection Tutorial Setup Download the Data : Download the data from this Google Drive link . Upload the downloaded files to your environment (e.g., Jupyter Notebook, Google Colab, etc.). Install Required Packages !pip install --quiet fair-sense-ai !pip uninstall sympy -y !pip install sympy --upgrade !apt update !apt install -y tesseract-ocr Restart your system if you are using Google Colab. Example Colab Notebook: Run the Tutorial Code Examples 1. Text Bias Analysis # Import Required Libraries from fairsenseai import analyze_text_for_bias # Example input text to analyze for bias text_input = \"Women are better at multitasking than men.\" # Analyze the text for bias using FairSense AI highlighted_text, detailed_analysis = analyze_text_for_bias(text_input) # Print the analysis results print(\"Highlighted Text:\", highlighted_text) print(\"Detailed Analysis:\", detailed_analysis) 2. 
Image Bias Analysis # Import Required Libraries import requests from PIL import Image from io import BytesIO from fairsenseai import analyze_image_for_bias from IPython.display import display, HTML # URL of the image to analyze image_url = \"https://cdn.i-scmp.com/sites/default/files/styles/1200x800/public/images/methode/2018/05/31/20b096c2-64b4-11e8-82ea-2acc56ad2bf7_1280x720_173440.jpg?itok=2I32exTB\" # Fetch and load the image response = requests.get(image_url) if response.status_code == 200: # Load the image image = Image.open(BytesIO(response.content)) # Resize the image for smaller display small_image = image.copy() small_image.thumbnail((200, 200)) # Maintain aspect ratio while resizing # Display the resized image print(\"Original Image (Resized):\") display(small_image) # Analyze the image for bias highlighted_caption, image_analysis = analyze_image_for_bias(image) # Print the analysis results print(\"Highlighted Caption:\", highlighted_caption) print(\"Image Analysis:\", image_analysis) # Display highlighted captions (if available) if highlighted_caption: display(HTML(highlighted_caption)) else: print(f\"Failed to fetch the image. Status code: {response.status_code}\") 3. Launch the Interactive Application from fairsenseai import main # Launch the Gradio application (will open in the browser) main() How to Use Fair-Sense-AI 1. Text Analysis Navigate to the Text Analysis tab in the Gradio interface. Input or paste the text you want to analyze. Click Analyze to detect and highlight biases. 2. Image Analysis Navigate to the Image Analysis tab. Upload an image to analyze for biases in embedded text or captions. Click Analyze to view detailed results. 3. Batch Text CSV Analysis Navigate to the Batch Text CSV Analysis tab. Upload a CSV file with a column named text . Click Analyze CSV to process and analyze all entries. 4. Batch Image Analysis Navigate to the Batch Image Analysis tab. Upload multiple images to analyze biases in captions or embedded text. Click Analyze Images to view results. 5. AI Governance Insights Navigate to the AI Governance and Safety tab. Choose a predefined topic or input your own. Click Get Insights for actionable recommendations. Troubleshooting Common Issues Models Download Slowly : On first use, models are downloaded automatically. Ensure you have a stable internet connection. Tesseract Not Found : Verify Tesseract is installed and accessible in your system's PATH. GPU Support : Install PyTorch with CUDA support if you want GPU acceleration. bash pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117 Further instructions sample data A sample CSV file with a text column. Sample images for analysis. Prerequisites Python 3.7+ Ensure you have Python installed. Download it here . Tesseract OCR Required for extracting text from images. #### Installation Instructions: - Ubuntu : bash sudo apt-get update sudo apt-get install tesseract-ocr - macOS (Homebrew) : bash brew install tesseract - Windows : - Download and install Tesseract OCR from this link . Contact For inquiries or support, contact: Shaina Raza, PhD Applied ML Scientist, Responsible AI shaina.raza@vectorinstitute.ai License This project is licensed under the Creative Commons License .","title":"API Reference"},{"location":"#fair-sense-ai","text":"Fair-Sense-AI is a cutting-edge, AI-driven platform designed to promote transparency, fairness, and equity by analyzing bias in textual and visual content. 
Whether you're addressing societal biases, identifying disinformation, or fostering responsible AI practices, Fair-Sense-AI equips you with the tools to make informed decisions.","title":"Fair-Sense-AI"},{"location":"#key-features","text":"","title":"Key Features"},{"location":"#text-analysis","text":"Detect and highlight biases within text, such as targeted language or phrases. Provide actionable feedback on the tone and fairness of the content.","title":"\ud83d\udcc4 Text Analysis"},{"location":"#image-analysis","text":"Extract embedded text from images and analyze it for potential biases. Generate captions for images and evaluate their fairness and inclusivity.","title":"\ud83d\uddbc\ufe0f Image Analysis"},{"location":"#batch-processing","text":"Analyze large datasets of text or images efficiently. Automatically highlight problematic patterns across entire datasets.","title":"\ud83d\udcc2 Batch Processing"},{"location":"#ai-governance-insights","text":"Gain detailed insights into ethical AI practices, fairness guidelines, and bias mitigation strategies. Explore topics like data privacy, transparency, and responsible AI deployment.","title":"\ud83d\udcdc AI Governance Insights"},{"location":"#demo-video","text":"Watch the demonstration of the FairSense platform below:","title":"Demo Video"},{"location":"#installing-fair-sense-ai","text":"Install the Fair-Sense-AI package using pip: pip install Fair-Sense-AI","title":"Installing Fair-Sense-AI"},{"location":"#usage-instructions","text":"","title":"Usage Instructions"},{"location":"#launching-the-application","text":"Run the following command to start Fair-Sense-AI: Fair-Sense-AI This will launch the Gradio-powered interface in your default web browser.","title":"Launching the Application"},{"location":"#bias-detection-tutorial","text":"","title":"Bias Detection Tutorial"},{"location":"#setup","text":"Download the Data : Download the data from this Google Drive link . Upload the downloaded files to your environment (e.g., Jupyter Notebook, Google Colab, etc.).","title":"Setup"},{"location":"#install-required-packages","text":"!pip install --quiet fair-sense-ai !pip uninstall sympy -y !pip install sympy --upgrade !apt update !apt install -y tesseract-ocr Restart your system if you are using Google Colab. Example Colab Notebook: Run the Tutorial","title":"Install Required Packages"},{"location":"#code-examples","text":"","title":"Code Examples"},{"location":"#1-text-bias-analysis","text":"# Import Required Libraries from fairsenseai import analyze_text_for_bias # Example input text to analyze for bias text_input = \"Women are better at multitasking than men.\" # Analyze the text for bias using FairSense AI highlighted_text, detailed_analysis = analyze_text_for_bias(text_input) # Print the analysis results print(\"Highlighted Text:\", highlighted_text) print(\"Detailed Analysis:\", detailed_analysis)","title":"1. 
Text Bias Analysis"},{"location":"#2-image-bias-analysis","text":"# Import Required Libraries import requests from PIL import Image from io import BytesIO from fairsenseai import analyze_image_for_bias from IPython.display import display, HTML # URL of the image to analyze image_url = \"https://cdn.i-scmp.com/sites/default/files/styles/1200x800/public/images/methode/2018/05/31/20b096c2-64b4-11e8-82ea-2acc56ad2bf7_1280x720_173440.jpg?itok=2I32exTB\" # Fetch and load the image response = requests.get(image_url) if response.status_code == 200: # Load the image image = Image.open(BytesIO(response.content)) # Resize the image for smaller display small_image = image.copy() small_image.thumbnail((200, 200)) # Maintain aspect ratio while resizing # Display the resized image print(\"Original Image (Resized):\") display(small_image) # Analyze the image for bias highlighted_caption, image_analysis = analyze_image_for_bias(image) # Print the analysis results print(\"Highlighted Caption:\", highlighted_caption) print(\"Image Analysis:\", image_analysis) # Display highlighted captions (if available) if highlighted_caption: display(HTML(highlighted_caption)) else: print(f\"Failed to fetch the image. Status code: {response.status_code}\")","title":"2. Image Bias Analysis"},{"location":"#3-launch-the-interactive-application","text":"from fairsenseai import main # Launch the Gradio application (will open in the browser) main()","title":"3. Launch the Interactive Application"},{"location":"#how-to-use-fair-sense-ai","text":"","title":"How to Use Fair-Sense-AI"},{"location":"#1-text-analysis","text":"Navigate to the Text Analysis tab in the Gradio interface. Input or paste the text you want to analyze. Click Analyze to detect and highlight biases.","title":"1. Text Analysis"},{"location":"#2-image-analysis","text":"Navigate to the Image Analysis tab. Upload an image to analyze for biases in embedded text or captions. Click Analyze to view detailed results.","title":"2. Image Analysis"},{"location":"#3-batch-text-csv-analysis","text":"Navigate to the Batch Text CSV Analysis tab. Upload a CSV file with a column named text . Click Analyze CSV to process and analyze all entries.","title":"3. Batch Text CSV Analysis"},{"location":"#4-batch-image-analysis","text":"Navigate to the Batch Image Analysis tab. Upload multiple images to analyze biases in captions or embedded text. Click Analyze Images to view results.","title":"4. Batch Image Analysis"},{"location":"#5-ai-governance-insights","text":"Navigate to the AI Governance and Safety tab. Choose a predefined topic or input your own. Click Get Insights for actionable recommendations.","title":"5. AI Governance Insights"},{"location":"#troubleshooting","text":"","title":"Troubleshooting"},{"location":"#common-issues","text":"Models Download Slowly : On first use, models are downloaded automatically. Ensure you have a stable internet connection. Tesseract Not Found : Verify Tesseract is installed and accessible in your system's PATH. GPU Support : Install PyTorch with CUDA support if you want GPU acceleration. bash pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117","title":"Common Issues"},{"location":"#further-instructions","text":"sample data A sample CSV file with a text column. Sample images for analysis. Prerequisites Python 3.7+ Ensure you have Python installed. Download it here . Tesseract OCR Required for extracting text from images. 
#### Installation Instructions: - Ubuntu : bash sudo apt-get update sudo apt-get install tesseract-ocr - macOS (Homebrew) : bash brew install tesseract - Windows : - Download and install Tesseract OCR from this link .","title":"Further instructions"},{"location":"#contact","text":"For inquiries or support, contact: Shaina Raza, PhD Applied ML Scientist, Responsible AI shaina.raza@vectorinstitute.ai","title":"Contact"},{"location":"#license","text":"This project is licensed under the Creative Commons License .","title":"License"},{"location":"documentation/","text":"FairSense API Documentation Introduction FairSense is an AI-driven platform for detecting and analyzing bias in textual and visual content. This document outlines the key functions and APIs provided by FairSense for integration and usage. We are releasing a multimodal bias detection toolkit 1. Core Components Device Setup Sets up the device for model computation (CPU or GPU). DEVICE = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\") Model Initialization Preloads the required models: - Text Model : unsloth/Llama-3.2-1B-Instruct or meta-llama/Llama-3.2-1B-Instruct or any instruct model - Image Captioning Model : Salesforce/blip-image-captioning-large - Summarizer : sshleifer/distilbart-cnn-12-6 2. Helper Functions post_process_response(response) Purpose : Cleans and summarizes AI model responses. Parameters : response (str) : The raw response from the AI model. Returns : A cleaned and summarized response string. Example : python processed_response = post_process_response(\"This is the raw response from the model.\") print(processed_response) highlight_bias(text, bias_words) Purpose : Highlights specific biased words in the text. Parameters : text (str) : The input text to analyze. bias_words (list) : A list of words to highlight as biased. Returns : HTML-formatted text with highlighted bias words. Example : python highlighted_text = highlight_bias(\"This is a biased statement.\", [\"biased\"]) print(highlighted_text) 3. Text Analysis generate_response_with_model(prompt, progress=None) Purpose : Generates a response from the AI model for a given prompt. Parameters : prompt (str) : The input prompt for the model. progress (callable, optional) : Function to track progress. Returns : AI-generated response as a string. Example : python response = generate_response_with_model(\"Analyze this text for bias.\") print(response) analyze_text_for_bias(text_input, progress=gr.Progress()) Purpose : Analyzes a given text for bias and provides a detailed analysis. Parameters : text_input (str) : Text to analyze. progress (gr.Progress) : Progress tracker. Returns : Highlighted text and detailed analysis. Example : python highlighted, analysis = analyze_text_for_bias(\"This text may contain bias.\") print(highlighted) print(analysis) 4. Image Analysis preprocess_image(image) Purpose : Converts images to grayscale and applies thresholding for OCR. Parameters : image (PIL.Image) : The input image. Returns : A preprocessed image for OCR. Example : python from PIL import Image image = Image.open(\"example.jpg\") preprocessed = preprocess_image(image) preprocessed.show() analyze_image_for_bias(image, progress=gr.Progress()) Purpose : Analyzes an image for bias by extracting text and generating captions. Parameters : image (PIL.Image) : The input image. progress (gr.Progress) : Progress tracker. Returns : Highlighted captions and detailed analysis. 
Example : python image = Image.open(\"example.jpg\") highlighted, analysis = analyze_image_for_bias(image) print(highlighted) print(analysis) 5. Batch Processing analyze_text_csv(file, output_filename=\"analysis_results.csv\") Purpose : Analyzes a CSV file of text entries for bias. Parameters : file (File) : CSV file with text data. output_filename (str) : Name of the output CSV file. Returns : An HTML table with analysis results. Example : python html_table = analyze_text_csv(\"data.csv\") print(html_table) analyze_images_batch(images, output_filename=\"image_analysis_results.csv\") Purpose : Analyzes multiple images for bias. Parameters : images (list) : List of image paths. output_filename (str) : Name of the output file. Returns : HTML table with analysis results and image previews. Example : python results = analyze_images_batch([\"image1.jpg\", \"image2.png\"]) print(results) save_results_to_csv(df, filename=\"results.csv\") Purpose : Saves analysis results to a CSV file. Parameters : df (pandas.DataFrame) : DataFrame containing results. filename (str) : Name of the output file. Returns : Path to the saved file. Example : python results_df = pd.DataFrame([{\"text\": \"example\", \"analysis\": \"unbiased\"}]) save_path = save_results_to_csv(results_df, \"output.csv\") print(save_path) 6. AI Governance ai_governance_response(prompt, progress=None) Purpose : Provides insights into AI governance and safety. Parameters : prompt (str) : Topic or question about AI governance. progress (callable, optional) : Progress tracker. Returns : AI-generated insights and recommendations. Example : python insights = ai_governance_response(\"Discuss AI ethics.\") print(insights) 7. AI Safety Dashboard display_ai_safety_dashboard() Purpose : Visualizes AI safety risks using interactive charts. Returns : Tuple containing bar chart, pie chart, scatter plot, and DataFrame. Example : python fig_bar, fig_pie, fig_scatter, risks_df = display_ai_safety_dashboard() fig_bar.show() Next Steps This documentation provides the foundation for integrating FairSense into your workflows. Contact : For inquiries, collaborations, or feedback, connect with Shaina Raza, PhD , at shaina.raza@vectorinstitute.ai . Let me know if you need anything else added! \ud83d\ude0a","title":"Documentation"},{"location":"documentation/#fairsense-api-documentation","text":"","title":"FairSense API Documentation"},{"location":"documentation/#introduction","text":"FairSense is an AI-driven platform for detecting and analyzing bias in textual and visual content. This document outlines the key functions and APIs provided by FairSense for integration and usage. We are releasing a multimodal bias detection toolkit","title":"Introduction"},{"location":"documentation/#1-core-components","text":"","title":"1. Core Components"},{"location":"documentation/#device-setup","text":"Sets up the device for model computation (CPU or GPU). DEVICE = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")","title":"Device Setup"},{"location":"documentation/#model-initialization","text":"Preloads the required models: - Text Model : unsloth/Llama-3.2-1B-Instruct or meta-llama/Llama-3.2-1B-Instruct or any instruct model - Image Captioning Model : Salesforce/blip-image-captioning-large - Summarizer : sshleifer/distilbart-cnn-12-6","title":"Model Initialization"},{"location":"documentation/#2-helper-functions","text":"","title":"2. 
Helper Functions"},{"location":"documentation/#post_process_responseresponse","text":"Purpose : Cleans and summarizes AI model responses. Parameters : response (str) : The raw response from the AI model. Returns : A cleaned and summarized response string. Example : python processed_response = post_process_response(\"This is the raw response from the model.\") print(processed_response)","title":"post_process_response(response)"},{"location":"documentation/#highlight_biastext-bias_words","text":"Purpose : Highlights specific biased words in the text. Parameters : text (str) : The input text to analyze. bias_words (list) : A list of words to highlight as biased. Returns : HTML-formatted text with highlighted bias words. Example : python highlighted_text = highlight_bias(\"This is a biased statement.\", [\"biased\"]) print(highlighted_text)","title":"highlight_bias(text, bias_words)"},{"location":"documentation/#3-text-analysis","text":"","title":"3. Text Analysis"},{"location":"documentation/#generate_response_with_modelprompt-progressnone","text":"Purpose : Generates a response from the AI model for a given prompt. Parameters : prompt (str) : The input prompt for the model. progress (callable, optional) : Function to track progress. Returns : AI-generated response as a string. Example : python response = generate_response_with_model(\"Analyze this text for bias.\") print(response)","title":"generate_response_with_model(prompt, progress=None)"},{"location":"documentation/#analyze_text_for_biastext_input-progressgrprogress","text":"Purpose : Analyzes a given text for bias and provides a detailed analysis. Parameters : text_input (str) : Text to analyze. progress (gr.Progress) : Progress tracker. Returns : Highlighted text and detailed analysis. Example : python highlighted, analysis = analyze_text_for_bias(\"This text may contain bias.\") print(highlighted) print(analysis)","title":"analyze_text_for_bias(text_input, progress=gr.Progress())"},{"location":"documentation/#4-image-analysis","text":"","title":"4. Image Analysis"},{"location":"documentation/#preprocess_imageimage","text":"Purpose : Converts images to grayscale and applies thresholding for OCR. Parameters : image (PIL.Image) : The input image. Returns : A preprocessed image for OCR. Example : python from PIL import Image image = Image.open(\"example.jpg\") preprocessed = preprocess_image(image) preprocessed.show()","title":"preprocess_image(image)"},{"location":"documentation/#analyze_image_for_biasimage-progressgrprogress","text":"Purpose : Analyzes an image for bias by extracting text and generating captions. Parameters : image (PIL.Image) : The input image. progress (gr.Progress) : Progress tracker. Returns : Highlighted captions and detailed analysis. Example : python image = Image.open(\"example.jpg\") highlighted, analysis = analyze_image_for_bias(image) print(highlighted) print(analysis)","title":"analyze_image_for_bias(image, progress=gr.Progress())"},{"location":"documentation/#5-batch-processing","text":"","title":"5. Batch Processing"},{"location":"documentation/#analyze_text_csvfile-output_filenameanalysis_resultscsv","text":"Purpose : Analyzes a CSV file of text entries for bias. Parameters : file (File) : CSV file with text data. output_filename (str) : Name of the output CSV file. Returns : An HTML table with analysis results. 
Example : python html_table = analyze_text_csv(\"data.csv\") print(html_table)","title":"analyze_text_csv(file, output_filename=\"analysis_results.csv\")"},{"location":"documentation/#analyze_images_batchimages-output_filenameimage_analysis_resultscsv","text":"Purpose : Analyzes multiple images for bias. Parameters : images (list) : List of image paths. output_filename (str) : Name of the output file. Returns : HTML table with analysis results and image previews. Example : python results = analyze_images_batch([\"image1.jpg\", \"image2.png\"]) print(results)","title":"analyze_images_batch(images, output_filename=\"image_analysis_results.csv\")"},{"location":"documentation/#save_results_to_csvdf-filenameresultscsv","text":"Purpose : Saves analysis results to a CSV file. Parameters : df (pandas.DataFrame) : DataFrame containing results. filename (str) : Name of the output file. Returns : Path to the saved file. Example : python results_df = pd.DataFrame([{\"text\": \"example\", \"analysis\": \"unbiased\"}]) save_path = save_results_to_csv(results_df, \"output.csv\") print(save_path)","title":"save_results_to_csv(df, filename=\"results.csv\")"},{"location":"documentation/#6-ai-governance","text":"","title":"6. AI Governance"},{"location":"documentation/#ai_governance_responseprompt-progressnone","text":"Purpose : Provides insights into AI governance and safety. Parameters : prompt (str) : Topic or question about AI governance. progress (callable, optional) : Progress tracker. Returns : AI-generated insights and recommendations. Example : python insights = ai_governance_response(\"Discuss AI ethics.\") print(insights)","title":"ai_governance_response(prompt, progress=None)"},{"location":"documentation/#7-ai-safety-dashboard","text":"","title":"7. AI Safety Dashboard"},{"location":"documentation/#display_ai_safety_dashboard","text":"Purpose : Visualizes AI safety risks using interactive charts. Returns : Tuple containing bar chart, pie chart, scatter plot, and DataFrame. Example : python fig_bar, fig_pie, fig_scatter, risks_df = display_ai_safety_dashboard() fig_bar.show()","title":"display_ai_safety_dashboard()"},{"location":"documentation/#next-steps","text":"This documentation provides the foundation for integrating FairSense into your workflows. Contact : For inquiries, collaborations, or feedback, connect with Shaina Raza, PhD , at shaina.raza@vectorinstitute.ai . Let me know if you need anything else added! \ud83d\ude0a","title":"Next Steps"}]} \ No newline at end of file +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Fair-Sense-AI Fair-Sense-AI is a cutting-edge, AI-driven platform designed to promote transparency, fairness, and equity by analyzing bias in textual and visual content. Whether you're addressing societal biases, identifying disinformation, or fostering responsible AI practices, Fair-Sense-AI equips you with the tools to make informed decisions. \ud83d\udce6 Fair-Sense-AI on PyPI Key Features \ud83d\udcc4 Text Analysis Detect and highlight biases within text, such as targeted language or stereotypes. Provide actionable feedback to improve the fairness of content. \ud83d\uddbc\ufe0f Image Analysis Extract embedded text from images and evaluate it for potential biases. Generate and assess image captions for fairness and inclusivity. \ud83d\udcc2 Batch Processing Efficiently analyze large datasets of text or images. Automatically flag problematic patterns across datasets. 
\ud83d\udcdc AI Governance Insights Gain insights into ethical AI practices and bias mitigation strategies. Explore topics such as data privacy, transparency, and responsible AI deployment. Demo Video Watch the demonstration of Fair-Sense-AI below: Installation Install Fair-Sense-AI using pip: pip install fair-sense-ai Dependencies Ensure the following prerequisites are met: 1. Python 3.7+ 2. Tesseract OCR for image analysis (installation instructions below). Usage Instructions Launch the Application Run the following command to start Fair-Sense-AI: fair-sense-ai This will launch a Gradio-powered interface in your default web browser. Bias Detection Tutorial Setup Download the Data : Our dataset Newsmediabias-plus Download example datasets from this Google Drive link to check. Upload the files to your environment (e.g., Jupyter Notebook, Google Colab, etc.). Example Google Colab notebook: Run the Tutorial . Install Required Packages pip install fair-sense-ai pip uninstall sympy -y pip install sympy --upgrade apt update apt install -y tesseract-ocr Code Examples Text Bias Analysis from fairsenseai import analyze_text_for_bias text_input = \"Women are better at multitasking than men.\" highlighted_text, detailed_analysis = analyze_text_for_bias(text_input) print(\"Highlighted Text:\", highlighted_text) print(\"Detailed Analysis:\", detailed_analysis) Image Bias Analysis from fairsenseai import analyze_image_for_bias from PIL import Image image = Image.open(\"example_image.jpg\") highlighted_caption, image_analysis = analyze_image_for_bias(image) print(\"Highlighted Caption:\", highlighted_caption) print(\"Image Analysis:\", image_analysis) Launch the Interactive Application from fairsenseai import main main() # Launches the Gradio interface in a browser How to Use Fair-Sense-AI 1. Text Analysis Input text into the Text Analysis tab of the Gradio interface. Click Analyze to detect and highlight biases. 2. Image Analysis Upload an image in the Image Analysis tab. Click Analyze to evaluate biases in captions or embedded text. 3. Batch Text CSV Analysis Upload a CSV file with a text column. Click Analyze CSV to process and flag all entries. 4. Batch Image Analysis Upload multiple images in the Batch Image Analysis tab. Click Analyze Images to view results. 5. AI Governance Insights Navigate to the AI Governance and Safety tab. Select a predefined topic or input your own. Click Get Insights for detailed recommendations. Troubleshooting Common Issues Slow Model Downloads : First-time users may experience slow downloads. Ensure a stable internet connection. Tesseract Missing : Verify Tesseract is installed and accessible in your system's PATH. GPU Acceleration : Install PyTorch with CUDA support for faster processing. pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117 Prerequisites Python 3.7+ : Download here . Tesseract OCR : Required for image text extraction. #### Installation Instructions: - Ubuntu : bash sudo apt-get update sudo apt-get install tesseract-ocr - macOS (Homebrew) : bash brew install tesseract - Windows : Download Tesseract OCR from this link . 
Contact For inquiries or support, contact: Shaina Raza, PhD Applied ML Scientist, Responsible AI shaina.raza@vectorinstitute.ai License This project is licensed under the Creative Commons License .","title":"API Reference"},{"location":"#fair-sense-ai","text":"Fair-Sense-AI is a cutting-edge, AI-driven platform designed to promote transparency, fairness, and equity by analyzing bias in textual and visual content. Whether you're addressing societal biases, identifying disinformation, or fostering responsible AI practices, Fair-Sense-AI equips you with the tools to make informed decisions. \ud83d\udce6 Fair-Sense-AI on PyPI","title":"Fair-Sense-AI"},{"location":"#key-features","text":"","title":"Key Features"},{"location":"#text-analysis","text":"Detect and highlight biases within text, such as targeted language or stereotypes. Provide actionable feedback to improve the fairness of content.","title":"\ud83d\udcc4 Text Analysis"},{"location":"#image-analysis","text":"Extract embedded text from images and evaluate it for potential biases. Generate and assess image captions for fairness and inclusivity.","title":"\ud83d\uddbc\ufe0f Image Analysis"},{"location":"#batch-processing","text":"Efficiently analyze large datasets of text or images. Automatically flag problematic patterns across datasets.","title":"\ud83d\udcc2 Batch Processing"},{"location":"#ai-governance-insights","text":"Gain insights into ethical AI practices and bias mitigation strategies. Explore topics such as data privacy, transparency, and responsible AI deployment.","title":"\ud83d\udcdc AI Governance Insights"},{"location":"#demo-video","text":"Watch the demonstration of Fair-Sense-AI below:","title":"Demo Video"},{"location":"#installation","text":"Install Fair-Sense-AI using pip: pip install fair-sense-ai","title":"Installation"},{"location":"#dependencies","text":"Ensure the following prerequisites are met: 1. Python 3.7+ 2. Tesseract OCR for image analysis (installation instructions below).","title":"Dependencies"},{"location":"#usage-instructions","text":"","title":"Usage Instructions"},{"location":"#launch-the-application","text":"Run the following command to start Fair-Sense-AI: fair-sense-ai This will launch a Gradio-powered interface in your default web browser.","title":"Launch the Application"},{"location":"#bias-detection-tutorial","text":"","title":"Bias Detection Tutorial"},{"location":"#setup","text":"Download the Data : Our dataset Newsmediabias-plus Download example datasets from this Google Drive link to check. Upload the files to your environment (e.g., Jupyter Notebook, Google Colab, etc.). 
Example Google Colab notebook: Run the Tutorial .","title":"Setup"},{"location":"#install-required-packages","text":"pip install fair-sense-ai pip uninstall sympy -y pip install sympy --upgrade apt update apt install -y tesseract-ocr","title":"Install Required Packages"},{"location":"#code-examples","text":"","title":"Code Examples"},{"location":"#text-bias-analysis","text":"from fairsenseai import analyze_text_for_bias text_input = \"Women are better at multitasking than men.\" highlighted_text, detailed_analysis = analyze_text_for_bias(text_input) print(\"Highlighted Text:\", highlighted_text) print(\"Detailed Analysis:\", detailed_analysis)","title":"Text Bias Analysis"},{"location":"#image-bias-analysis","text":"from fairsenseai import analyze_image_for_bias from PIL import Image image = Image.open(\"example_image.jpg\") highlighted_caption, image_analysis = analyze_image_for_bias(image) print(\"Highlighted Caption:\", highlighted_caption) print(\"Image Analysis:\", image_analysis)","title":"Image Bias Analysis"},{"location":"#launch-the-interactive-application","text":"from fairsenseai import main main() # Launches the Gradio interface in a browser","title":"Launch the Interactive Application"},{"location":"#how-to-use-fair-sense-ai","text":"","title":"How to Use Fair-Sense-AI"},{"location":"#1-text-analysis","text":"Input text into the Text Analysis tab of the Gradio interface. Click Analyze to detect and highlight biases.","title":"1. Text Analysis"},{"location":"#2-image-analysis","text":"Upload an image in the Image Analysis tab. Click Analyze to evaluate biases in captions or embedded text.","title":"2. Image Analysis"},{"location":"#3-batch-text-csv-analysis","text":"Upload a CSV file with a text column. Click Analyze CSV to process and flag all entries.","title":"3. Batch Text CSV Analysis"},{"location":"#4-batch-image-analysis","text":"Upload multiple images in the Batch Image Analysis tab. Click Analyze Images to view results.","title":"4. Batch Image Analysis"},{"location":"#5-ai-governance-insights","text":"Navigate to the AI Governance and Safety tab. Select a predefined topic or input your own. Click Get Insights for detailed recommendations.","title":"5. AI Governance Insights"},{"location":"#troubleshooting","text":"","title":"Troubleshooting"},{"location":"#common-issues","text":"Slow Model Downloads : First-time users may experience slow downloads. Ensure a stable internet connection. Tesseract Missing : Verify Tesseract is installed and accessible in your system's PATH. GPU Acceleration : Install PyTorch with CUDA support for faster processing. pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117","title":"Common Issues"},{"location":"#prerequisites","text":"Python 3.7+ : Download here . Tesseract OCR : Required for image text extraction. #### Installation Instructions: - Ubuntu : bash sudo apt-get update sudo apt-get install tesseract-ocr - macOS (Homebrew) : bash brew install tesseract - Windows : Download Tesseract OCR from this link .","title":"Prerequisites"},{"location":"#contact","text":"For inquiries or support, contact: Shaina Raza, PhD Applied ML Scientist, Responsible AI shaina.raza@vectorinstitute.ai","title":"Contact"},{"location":"#license","text":"This project is licensed under the Creative Commons License .","title":"License"},{"location":"documentation/","text":"FairSense API Documentation Introduction FairSense is an AI-driven platform for detecting and analyzing bias in textual and visual content. 
This document outlines the key functions and APIs provided by FairSense for integration and usage. We are releasing a multimodal bias detection toolkit 1. Core Components Device Setup Sets up the device for model computation (CPU or GPU). DEVICE = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\") Model Initialization Preloads the required models: - Text Model : unsloth/Llama-3.2-1B-Instruct or meta-llama/Llama-3.2-1B-Instruct or any instruct model - Image Captioning Model : Salesforce/blip-image-captioning-large - Summarizer : sshleifer/distilbart-cnn-12-6 2. Helper Functions post_process_response(response) Purpose : Cleans and summarizes AI model responses. Parameters : response (str) : The raw response from the AI model. Returns : A cleaned and summarized response string. Example : python processed_response = post_process_response(\"This is the raw response from the model.\") print(processed_response) highlight_bias(text, bias_words) Purpose : Highlights specific biased words in the text. Parameters : text (str) : The input text to analyze. bias_words (list) : A list of words to highlight as biased. Returns : HTML-formatted text with highlighted bias words. Example : python highlighted_text = highlight_bias(\"This is a biased statement.\", [\"biased\"]) print(highlighted_text) 3. Text Analysis generate_response_with_model(prompt, progress=None) Purpose : Generates a response from the AI model for a given prompt. Parameters : prompt (str) : The input prompt for the model. progress (callable, optional) : Function to track progress. Returns : AI-generated response as a string. Example : python response = generate_response_with_model(\"Analyze this text for bias.\") print(response) analyze_text_for_bias(text_input, progress=gr.Progress()) Purpose : Analyzes a given text for bias and provides a detailed analysis. Parameters : text_input (str) : Text to analyze. progress (gr.Progress) : Progress tracker. Returns : Highlighted text and detailed analysis. Example : python highlighted, analysis = analyze_text_for_bias(\"This text may contain bias.\") print(highlighted) print(analysis) 4. Image Analysis preprocess_image(image) Purpose : Converts images to grayscale and applies thresholding for OCR. Parameters : image (PIL.Image) : The input image. Returns : A preprocessed image for OCR. Example : python from PIL import Image image = Image.open(\"example.jpg\") preprocessed = preprocess_image(image) preprocessed.show() analyze_image_for_bias(image, progress=gr.Progress()) Purpose : Analyzes an image for bias by extracting text and generating captions. Parameters : image (PIL.Image) : The input image. progress (gr.Progress) : Progress tracker. Returns : Highlighted captions and detailed analysis. Example : python image = Image.open(\"example.jpg\") highlighted, analysis = analyze_image_for_bias(image) print(highlighted) print(analysis) 5. Batch Processing analyze_text_csv(file, output_filename=\"analysis_results.csv\") Purpose : Analyzes a CSV file of text entries for bias. Parameters : file (File) : CSV file with text data. output_filename (str) : Name of the output CSV file. Returns : An HTML table with analysis results. Example : python html_table = analyze_text_csv(\"data.csv\") print(html_table) analyze_images_batch(images, output_filename=\"image_analysis_results.csv\") Purpose : Analyzes multiple images for bias. Parameters : images (list) : List of image paths. output_filename (str) : Name of the output file. Returns : HTML table with analysis results and image previews. 
Example : python results = analyze_images_batch([\"image1.jpg\", \"image2.png\"]) print(results) save_results_to_csv(df, filename=\"results.csv\") Purpose : Saves analysis results to a CSV file. Parameters : df (pandas.DataFrame) : DataFrame containing results. filename (str) : Name of the output file. Returns : Path to the saved file. Example : python results_df = pd.DataFrame([{\"text\": \"example\", \"analysis\": \"unbiased\"}]) save_path = save_results_to_csv(results_df, \"output.csv\") print(save_path) 6. AI Governance ai_governance_response(prompt, progress=None) Purpose : Provides insights into AI governance and safety. Parameters : prompt (str) : Topic or question about AI governance. progress (callable, optional) : Progress tracker. Returns : AI-generated insights and recommendations. Example : python insights = ai_governance_response(\"Discuss AI ethics.\") print(insights) 7. AI Safety Dashboard display_ai_safety_dashboard() Purpose : Visualizes AI safety risks using interactive charts. Returns : Tuple containing bar chart, pie chart, scatter plot, and DataFrame. Example : python fig_bar, fig_pie, fig_scatter, risks_df = display_ai_safety_dashboard() fig_bar.show() Next Steps This documentation provides the foundation for integrating FairSense into your workflows. Contact : For inquiries, collaborations, or feedback, connect with Shaina Raza, PhD , at shaina.raza@vectorinstitute.ai . Let me know if you need anything else added! \ud83d\ude0a","title":"Documentation"},{"location":"documentation/#fairsense-api-documentation","text":"","title":"FairSense API Documentation"},{"location":"documentation/#introduction","text":"FairSense is an AI-driven platform for detecting and analyzing bias in textual and visual content. This document outlines the key functions and APIs provided by FairSense for integration and usage. We are releasing a multimodal bias detection toolkit","title":"Introduction"},{"location":"documentation/#1-core-components","text":"","title":"1. Core Components"},{"location":"documentation/#device-setup","text":"Sets up the device for model computation (CPU or GPU). DEVICE = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")","title":"Device Setup"},{"location":"documentation/#model-initialization","text":"Preloads the required models: - Text Model : unsloth/Llama-3.2-1B-Instruct or meta-llama/Llama-3.2-1B-Instruct or any instruct model - Image Captioning Model : Salesforce/blip-image-captioning-large - Summarizer : sshleifer/distilbart-cnn-12-6","title":"Model Initialization"},{"location":"documentation/#2-helper-functions","text":"","title":"2. Helper Functions"},{"location":"documentation/#post_process_responseresponse","text":"Purpose : Cleans and summarizes AI model responses. Parameters : response (str) : The raw response from the AI model. Returns : A cleaned and summarized response string. Example : python processed_response = post_process_response(\"This is the raw response from the model.\") print(processed_response)","title":"post_process_response(response)"},{"location":"documentation/#highlight_biastext-bias_words","text":"Purpose : Highlights specific biased words in the text. Parameters : text (str) : The input text to analyze. bias_words (list) : A list of words to highlight as biased. Returns : HTML-formatted text with highlighted bias words. 
Example : python highlighted_text = highlight_bias(\"This is a biased statement.\", [\"biased\"]) print(highlighted_text)","title":"highlight_bias(text, bias_words)"},{"location":"documentation/#3-text-analysis","text":"","title":"3. Text Analysis"},{"location":"documentation/#generate_response_with_modelprompt-progressnone","text":"Purpose : Generates a response from the AI model for a given prompt. Parameters : prompt (str) : The input prompt for the model. progress (callable, optional) : Function to track progress. Returns : AI-generated response as a string. Example : python response = generate_response_with_model(\"Analyze this text for bias.\") print(response)","title":"generate_response_with_model(prompt, progress=None)"},{"location":"documentation/#analyze_text_for_biastext_input-progressgrprogress","text":"Purpose : Analyzes a given text for bias and provides a detailed analysis. Parameters : text_input (str) : Text to analyze. progress (gr.Progress) : Progress tracker. Returns : Highlighted text and detailed analysis. Example : python highlighted, analysis = analyze_text_for_bias(\"This text may contain bias.\") print(highlighted) print(analysis)","title":"analyze_text_for_bias(text_input, progress=gr.Progress())"},{"location":"documentation/#4-image-analysis","text":"","title":"4. Image Analysis"},{"location":"documentation/#preprocess_imageimage","text":"Purpose : Converts images to grayscale and applies thresholding for OCR. Parameters : image (PIL.Image) : The input image. Returns : A preprocessed image for OCR. Example : python from PIL import Image image = Image.open(\"example.jpg\") preprocessed = preprocess_image(image) preprocessed.show()","title":"preprocess_image(image)"},{"location":"documentation/#analyze_image_for_biasimage-progressgrprogress","text":"Purpose : Analyzes an image for bias by extracting text and generating captions. Parameters : image (PIL.Image) : The input image. progress (gr.Progress) : Progress tracker. Returns : Highlighted captions and detailed analysis. Example : python image = Image.open(\"example.jpg\") highlighted, analysis = analyze_image_for_bias(image) print(highlighted) print(analysis)","title":"analyze_image_for_bias(image, progress=gr.Progress())"},{"location":"documentation/#5-batch-processing","text":"","title":"5. Batch Processing"},{"location":"documentation/#analyze_text_csvfile-output_filenameanalysis_resultscsv","text":"Purpose : Analyzes a CSV file of text entries for bias. Parameters : file (File) : CSV file with text data. output_filename (str) : Name of the output CSV file. Returns : An HTML table with analysis results. Example : python html_table = analyze_text_csv(\"data.csv\") print(html_table)","title":"analyze_text_csv(file, output_filename=\"analysis_results.csv\")"},{"location":"documentation/#analyze_images_batchimages-output_filenameimage_analysis_resultscsv","text":"Purpose : Analyzes multiple images for bias. Parameters : images (list) : List of image paths. output_filename (str) : Name of the output file. Returns : HTML table with analysis results and image previews. Example : python results = analyze_images_batch([\"image1.jpg\", \"image2.png\"]) print(results)","title":"analyze_images_batch(images, output_filename=\"image_analysis_results.csv\")"},{"location":"documentation/#save_results_to_csvdf-filenameresultscsv","text":"Purpose : Saves analysis results to a CSV file. Parameters : df (pandas.DataFrame) : DataFrame containing results. filename (str) : Name of the output file. Returns : Path to the saved file. 
Example : python results_df = pd.DataFrame([{\"text\": \"example\", \"analysis\": \"unbiased\"}]) save_path = save_results_to_csv(results_df, \"output.csv\") print(save_path)","title":"save_results_to_csv(df, filename=\"results.csv\")"},{"location":"documentation/#6-ai-governance","text":"","title":"6. AI Governance"},{"location":"documentation/#ai_governance_responseprompt-progressnone","text":"Purpose : Provides insights into AI governance and safety. Parameters : prompt (str) : Topic or question about AI governance. progress (callable, optional) : Progress tracker. Returns : AI-generated insights and recommendations. Example : python insights = ai_governance_response(\"Discuss AI ethics.\") print(insights)","title":"ai_governance_response(prompt, progress=None)"},{"location":"documentation/#7-ai-safety-dashboard","text":"","title":"7. AI Safety Dashboard"},{"location":"documentation/#display_ai_safety_dashboard","text":"Purpose : Visualizes AI safety risks using interactive charts. Returns : Tuple containing bar chart, pie chart, scatter plot, and DataFrame. Example : python fig_bar, fig_pie, fig_scatter, risks_df = display_ai_safety_dashboard() fig_bar.show()","title":"display_ai_safety_dashboard()"},{"location":"documentation/#next-steps","text":"This documentation provides the foundation for integrating FairSense into your workflows. Contact : For inquiries, collaborations, or feedback, connect with Shaina Raza, PhD , at shaina.raza@vectorinstitute.ai . Let me know if you need anything else added! \ud83d\ude0a","title":"Next Steps"}]} \ No newline at end of file