How to test no-code agents on Maxim?
Test AI agents that are configured and stored as no-code agents on the Maxim platform. This approach uses the `with_prompt_chain_version_id` builder method to execute pre-configured no-code agents without implementing any custom logic.
Basic No-code Agent Testing
Use the `with_prompt_chain_version_id` method to test agents configured as no-code agents on the Maxim platform:
```python
from maxim import Maxim

# Initialize the Maxim SDK
maxim = Maxim({"api_key": "your-api-key"})

# Create and run the test using a Maxim no-code agent
result = (
    maxim.create_test_run(
        name="No-code Agent Test", in_workspace_id="your-workspace-id"
    )
    .with_data_structure(
        {
            "input": "INPUT",
            "expected_output": "EXPECTED_OUTPUT",
        }
    )
    .with_data("your-dataset-id")
    .with_evaluators("Bias")
    .with_prompt_chain_version_id("your-prompt-chain-version-id")
    .run()
)

print(f"Test run completed! View results: {result.test_run_result.link}")
```
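The `with_data_structure` mapping above declares how dataset columns are used: each key names a column in your dataset, and each value declares its role (such as `INPUT` or `EXPECTED_OUTPUT`). As a rough, SDK-independent sketch of how such a declaration lines up with dataset rows (the `validate_rows` helper and the set of roles shown are illustrative, not part of the Maxim SDK):

```python
# Illustrative only: a minimal checker for a column-role mapping like the one
# passed to with_data_structure. Not part of the Maxim SDK; the real platform
# may support additional roles beyond the two shown in this guide.
KNOWN_ROLES = {"INPUT", "EXPECTED_OUTPUT"}

def validate_rows(structure: dict, rows: list[dict]) -> list[str]:
    """Return a list of problems found in the declared structure and rows."""
    problems = []
    for column, role in structure.items():
        if role not in KNOWN_ROLES:
            problems.append(f"unknown role {role!r} for column {column!r}")
    for i, row in enumerate(rows):
        missing = set(structure) - set(row)
        if missing:
            problems.append(f"row {i} is missing columns: {sorted(missing)}")
    return problems

structure = {"input": "INPUT", "expected_output": "EXPECTED_OUTPUT"}
rows = [{"input": "What is 2+2?", "expected_output": "4"}]
print(validate_rows(structure, rows))
```

The point of the sketch is simply that the structure and the dataset must agree column-for-column; mismatches surface as test-run configuration errors.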
Using Presets
If you have saved test configuration presets on the Maxim platform, you can use them to automatically load datasets, evaluators, and other settings without specifying them manually:
```python
from maxim import Maxim

maxim = Maxim({"api_key": "your-api-key"})

# Test using a saved preset: dataset, evaluators, and context
# settings are loaded from the preset automatically
result = (
    maxim.create_test_run(
        name="No-code Agent Test with Preset",
        in_workspace_id="your-workspace-id",
    )
    .with_prompt_chain_version_id("your-prompt-chain-version-id")
    .with_preset("My Saved Preset")
    .run()
)

print(f"Test completed! View results: {result.test_run_result.link}")
```
Any values you set explicitly via other builder methods will take priority over the preset defaults:
```python
result = (
    maxim.create_test_run(
        name="No-code Agent Test with Preset Override",
        in_workspace_id="your-workspace-id",
    )
    .with_prompt_chain_version_id("your-prompt-chain-version-id")
    .with_preset("My Saved Preset")
    .with_data("different-dataset-id")  # Overrides the preset's dataset
    .run()
)
```
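Conceptually, the builder merges preset defaults with explicitly set values, and explicit calls win key by key. A minimal, SDK-independent sketch of that precedence rule (the `TestRunBuilder` class and its methods here are hypothetical, for illustration only, not the Maxim SDK's actual implementation):

```python
# Hypothetical sketch of preset-vs-explicit precedence; not the Maxim SDK.
class TestRunBuilder:
    def __init__(self):
        self._preset = {}    # defaults loaded from a saved preset
        self._explicit = {}  # values set via explicit builder calls

    def with_preset(self, preset_values: dict):
        self._preset = dict(preset_values)
        return self

    def with_data(self, dataset_id: str):
        self._explicit["dataset_id"] = dataset_id
        return self

    def resolved_config(self) -> dict:
        # Explicit values override preset defaults key by key.
        return {**self._preset, **self._explicit}

config = (
    TestRunBuilder()
    .with_preset({"dataset_id": "preset-dataset", "evaluators": ["Bias"]})
    .with_data("different-dataset-id")
    .resolved_config()
)
print(config["dataset_id"])  # → different-dataset-id
```

Under this model, preset values you do not override (here, the evaluators) still apply unchanged.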
If the preset includes human evaluators, you must also call `with_human_evaluation_config` to provide the reviewer emails.
Next Steps
For complex multi-agent endpoints, consider using Maxim's observability features to trace individual steps and debug no-code agent execution.