
[META] ML Inference Processor Enhancements I #2839

Closed
3 tasks done
mingshl opened this issue Aug 20, 2024 · 2 comments
Labels: 2.17, enhancement (New feature or request), v2.17.0

Comments

@mingshl
Collaborator

mingshl commented Aug 20, 2024

Is your feature request related to a problem?

Search response processor:

  • Support one-to-one inference in the ML inference search response processor.
  • Support lists in string substitution during the Predict API call, so the ML inference search response processor can support GenAI/RAG use cases with the prompt defined at the connector level.
  • Support a custom prompt in the model config; this currently causes an escaping problem due to the model input change described in [BUG] model_config prefix is changed in payload for ML inference processors #2822.
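For context, a rough sketch of how a one-to-one ML inference search response processor configuration might look in a search pipeline. This is an illustrative assumption, not the final design: the `one_to_one` flag, model ID, and field names below are placeholders.

```json
{
  "response_processors": [
    {
      "ml_inference": {
        "model_id": "<your_model_id>",
        "one_to_one": true,
        "input_map": [
          { "context": "review_text" }
        ],
        "output_map": [
          { "summary": "response" }
        ]
      }
    }
  ]
}
```

With `one_to_one` enabled, the processor would invoke the model once per search hit rather than batching all hits into a single prediction call, which is what a per-document GenAI/RAG prompt requires.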

What solution would you like?
A clear and concise description of what you want to happen.

What alternatives have you considered?
A clear and concise description of any alternative solutions or features you've considered.

Do you have any additional context?
Add any other context or screenshots about the feature request here.

@mingshl added the enhancement (New feature or request) and untriaged labels on Aug 20, 2024
@mingshl
Collaborator Author

mingshl commented Aug 20, 2024

@opensearch-project/admin -- Can you please move this to our roadmap https://github.com/orgs/opensearch-project/projects/1 under 2.17?

@mingshl
Collaborator Author

mingshl commented Sep 3, 2024

Support one-to-one inference in the ML inference search response processor: addressed in #2801.

Support lists in string substitution during the Predict API call, enabling GenAI/RAG use cases in the ML inference search response processor with the prompt defined at the connector level: addressed in #2871.

Added a to_string() method in HttpConnector to support custom prompts, also in #2871.
