
Add post on AI review tutorials #331

Merged: 9 commits merged into main from ai-review-tutorials on Nov 18, 2024

Conversation

adamkucharski
Member

This PR adds a blog post describing options for using LLMs to review open access training materials 'in character'.
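
(For illustration only, and not taken from the post itself: a minimal sketch of what an 'in character' review request could look like, here using the OpenAI Python client. The persona text, lesson file name, prompt wording, and model choice are all assumptions.)

```python
from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment.
client = OpenAI()

# Hypothetical learner persona; the post's actual personas may differ.
persona = (
    "You are a field epidemiologist who is comfortable with Excel "
    "but has little prior experience of R."
)

# Hypothetical lesson file; substitute the episode to be reviewed.
with open("episode.md") as f:
    lesson_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; any chat-capable model would do
    messages=[
        {"role": "system", "content": persona},
        {
            "role": "user",
            "content": (
                "Review the following training episode in character. "
                "Flag anything you would find unclear, note missing "
                "prerequisites, and suggest concrete improvements.\n\n"
                + lesson_text
            ),
        },
    ],
)

print(response.choices[0].message.content)
```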

Right before merging:

  • The date field has been updated
  • All reviewers have been acknowledged in a short paragraph
  • A PR has been opened in the blueprints to link to this post
  • The post has been re-rendered and content of the _freeze/ folder is up-to-date


netlify bot commented Nov 7, 2024

Deploy Preview for tourmaline-marshmallow-241b40 ready!

| Name | Link |
| --- | --- |
| 🔨 Latest commit | 20e813a |
| 🔍 Latest deploy log | https://app.netlify.com/sites/tourmaline-marshmallow-241b40/deploys/6736a073b8eb990008d02cfd |
| 😎 Deploy Preview | https://deploy-preview-331--tourmaline-marshmallow-241b40.netlify.app |
To edit notification comments on pull requests, go to your Netlify site configuration.

Member

@chartgerink chartgerink left a comment


Thanks for the proposed blog! 🙌

I suspect this won't take much of your time to edit. The main priority is to clearly denote which text is LLM-generated, as that is currently not clear enough. Also, the blog talks about "we", which makes it seem as if we as a community have collectively decided this. I encourage revising this to denote what you as an author are doing/have done.

Inline review threads on posts/ai-learner-review/index.qmd (four threads, all resolved; one marked outdated).

A challenge with LLMs trained for general use is finding domain-specific tasks where they can add sufficient value beyond existing human input. Tasks like providing early sense checking and tailored feedback, particularly from differing perspectives, therefore have the potential to overcome common bottlenecks in developing training materials (e.g. providing initial comments and flagging obvious issues while waiting for more detailed human feedback).

As Epiverse-TRACE training materials continue to develop, we plan to explore further uses beyond simple first-pass reviews. For example, LLMs are well suited to synthesising qualitative feedback, increasing the range of insights that can be collected and summarised from learners as we move into beta testing. We also hope to identify opportunities where LLMs can help supplement the learner experience, as demonstrated by emerging tools like [RTutor](http://rtutor.ai/) for descriptive plotting functionality in R, which combines generation of code in response to user queries with translation into shiny outputs.
Member


Suggested change
As Epiverse-TRACE training materials continue to develop, we plan to explore further uses beyond simple first-pass reviews. For example, LLMs are well suited to synthesising qualitative feedback, increasing the range of insights that can be collected and summarised from learners as we move into beta testing. We also hope to identify opportunities where LLMs can help supplement the learner experience, as demonstrated by emerging tools like [RTutor](http://rtutor.ai/) for descriptive plotting functionality in R, which combines generation of code in response to user queries with translation into shiny outputs.
As Epiverse-TRACE training materials continue to develop, I plan to explore further uses beyond simple first-pass reviews. For example, LLMs are well suited to synthesising qualitative feedback, increasing the range of insights that can be collected and summarised from learners as we move into beta testing. We also hope to identify opportunities where LLMs can help supplement the learner experience, as demonstrated by emerging tools like [RTutor](http://rtutor.ai/) for descriptive plotting functionality in R, which combines generation of code in response to user queries with translation into shiny outputs.

Using "we" makes it seem as if this is an Epiverse-TRACE decision, which it is not as far as I know.

Member Author


We (@avallecam and I) are looking at further LLM use cases (e.g. to support delivery as well as design). Fine to keep as a general 'we' I think, as discussions have been happening across the team (and with external collaborators).

Member


That makes sense. Could you reflect @avallecam as co-author in the YAML? Otherwise it gets confusing.

Member Author


Thanks, have now added @avallecam

@adamkucharski
Member Author

Thanks for the review - I have made some edits that hopefully address your comments.

@adamkucharski
Member Author

OK to now merge?

@chartgerink
Member

Thanks @adamkucharski 🙏 Happy to merge this.

@chartgerink chartgerink merged commit 9c99f35 into main Nov 18, 2024
9 checks passed
@chartgerink chartgerink deleted the ai-review-tutorials branch November 18, 2024 07:54