A Layered Meta-AI
Cognyn separates reasoning from execution. A planning core turns goals into structured workflows, while an orchestration layer executes those plans through connectors to external AIs and apps.
- Reasoning Core – parses goals, infers constraints, proposes multi-step plans.
- Orchestration Engine – runs the plan step by step, passing context and results.
- Connector Registry – tool definitions (capabilities, inputs/outputs, cost/credits).
- Evaluators – lightweight checks and review gates that improve quality.
- Privacy Controls – BYO keys, minimal context, transparent data flow.
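As a minimal sketch of how these layers fit together (all names here are hypothetical, not Cognyn's real API), a plan from the reasoning core can be run by an orchestration loop that looks up each tool in the registry and feeds results forward:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One unit of work in a plan: which tool to call, with what inputs."""
    tool: str
    inputs: dict
    outputs: dict = field(default_factory=dict)

@dataclass
class Plan:
    goal: str
    steps: list  # ordered Steps produced by the reasoning core

def orchestrate(plan, connectors):
    """Run each step through its registered connector, passing context forward."""
    context = {}
    for step in plan.steps:
        run = connectors[step.tool]           # look up the tool in the registry
        step.outputs = run({**context, **step.inputs})
        context.update(step.outputs)          # hand results to the next step
    return context
```

The key design point is the separation: the `Plan` is pure data the reasoning core can emit, and the engine never needs to know what a tool does, only its registry entry.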
Reasoning Core
The core turns natural-language goals into structured plans. It extracts intent, domains, and constraints, then proposes steps with suggested tools.
Inputs
- Goal statement & context
- Constraints (tone, audience, deadlines)
- User preferences (preferred tools)
Outputs
- Ordered steps with tool candidates
- Per-step inputs/outputs schema
- Estimated credits & effort
Signals
- Heuristics (length, style, brand fit)
- Historical plan success (non-PII)
- Evaluator feedback
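A plan emitted by the core might look like the following sketch; the field names and the `check_io_chain` helper are illustrative assumptions, not the real schema. It shows the per-step inputs/outputs contract and how the planner can verify that each step's inputs are satisfied by the goal or an earlier step:

```python
# Hypothetical shape of a plan emitted by the reasoning core (names illustrative).
plan = {
    "goal": "Launch a product blog post",
    "steps": [
        {"id": 1, "action": "draft_copy", "tool_candidates": ["llm"],
         "inputs": {"goal": "string", "tone": "string"},
         "outputs": {"draft": "string"}, "est_credits": 2},
        {"id": 2, "action": "hero_image", "tool_candidates": ["image_gen"],
         "inputs": {"draft": "string"},
         "outputs": {"asset_url": "string"}, "est_credits": 5},
    ],
}

def check_io_chain(steps, initial_keys):
    """Verify each step's inputs are available from the goal or an earlier step."""
    available = set(initial_keys)
    for step in steps:
        missing = set(step["inputs"]) - available
        if missing:
            return False, (step["id"], missing)   # report the first broken step
        available |= set(step["outputs"])
    return True, None
```

Summing `est_credits` over the steps gives the "Estimated credits & effort" output before anything runs.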
Orchestration Engine
The engine executes plans step by step. Each completed step feeds the next with the right context (prompts, attachments, URLs, metadata).
- Dispatch – select provider per step (e.g., OpenAI, Midjourney, Runway).
- Context Build – compile prompts and assets with constraints.
- Call – perform the API request, track usage and status.
- Validate – run evaluators; retry or branch if needed.
- Handoff – pass outputs to the next step or to the user.
Execution is transparent: users can inspect steps, swap providers, or insert approvals.
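The dispatch–call–validate–handoff loop above can be sketched as follows. The names and the simple retry policy are assumptions for illustration, not the shipped engine:

```python
def execute(plan, dispatch, evaluators, max_retries=2):
    """Run each step; re-run it if any evaluator rejects the output."""
    context = {}
    for step in plan:
        call = dispatch(step)                       # pick a provider for this step
        for attempt in range(max_retries + 1):
            output = call(context, step["inputs"])  # perform the call
            checks = evaluators.get(step["name"], [])
            if all(check(output) for check in checks):
                break                               # output passed all gates
        context[step["name"]] = output              # hand off to the next step
    return context
```

Because `dispatch` is a plain function argument, swapping a provider for one step is a local change, which is what makes the "inspect steps, swap providers" workflow possible.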
Connector Registry
Connectors define each tool's capabilities and I/O schema so the engine can call them safely and consistently.
Schema
- Capabilities (text, image, video, email, CMS)
- Inputs/Outputs JSON contracts
- Cost/credit weights & limits
Examples
- LLM (copy, planning, summaries)
- Image gen (prompt → asset URLs)
- Video gen/edit (script → clip)
- Email (subject/body → campaign)
- CMS (blocks/assets → page)
Flexibility
- Swap vendors per step
- Enforce tool policies
- Save reusable chains as templates
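A registry with capability lookup and an enforced input contract might be sketched like this; the `Connector` fields mirror the schema above, but the names are illustrative:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Connector:
    name: str
    capabilities: set    # e.g. {"text"}, {"image"}
    input_keys: set      # required input fields: a minimal I/O contract
    credit_weight: int   # relative cost, e.g. video > email
    call: Callable

class Registry:
    def __init__(self):
        self._by_name = {}

    def register(self, connector):
        self._by_name[connector.name] = connector

    def resolve(self, capability, inputs):
        """Find a connector with the capability and validate the input contract."""
        for c in self._by_name.values():
            if capability in c.capabilities:
                missing = c.input_keys - inputs.keys()
                if missing:
                    raise ValueError(f"missing inputs: {missing}")
                return c
        raise LookupError(f"no connector offers {capability!r}")
```

Resolving by capability rather than by name is what lets a step swap vendors without editing the plan.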
Evaluators & Quality Gates
Lightweight evaluators catch common issues and trigger micro-iterations before results move forward.
- Style & Tone – brand voice, reading level
- Completeness – required sections present
- Safety – content checks on prompts/outputs
- Image/Video Hints – composition, motion, duration
Evaluators can be automated or human-in-the-loop (approval steps).
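An automated gate can be as small as a completeness check. This sketch (hypothetical helper names) shows how gate failures could be collected so the engine can retry, branch, or hand off to a human approval step:

```python
def completeness_check(text, required_sections):
    """Flag any required section missing from a draft."""
    missing = [s for s in required_sections if s.lower() not in text.lower()]
    return (not missing, missing)

def run_gates(text, gates):
    """Run each named gate; collect failures for retry or escalation."""
    failures = []
    for name, gate in gates:
        ok, detail = gate(text)
        if not ok:
            failures.append((name, detail))
    return failures
```

An empty failure list means the output moves forward; a non-empty one carries enough detail to drive a targeted micro-iteration rather than a blind re-run.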
Data & Privacy by Design
Cognyn is orchestration-first and user-owned. You control your keys and data. We send only the minimum context required to the tools you select.
- Bring-your-own API keys
- Transparent step logs (what ran, why, where)
- No training on your data without explicit opt-in
- Scoped context per tool; no cross-vendor spillover
See the Privacy Policy for details.
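Scoped context per tool can be illustrated with a small sketch; the helper names are invented for this example and are not the production controls:

```python
def scope_context(context, allowed_keys):
    """Forward only the fields a tool declares; nothing else leaves the workspace."""
    return {k: context[k] for k in allowed_keys if k in context}

def log_step(step_name, provider, sent_keys):
    """A transparent step-log entry: what ran, where, and which fields were sent."""
    return {"step": step_name, "provider": provider, "sent": sorted(sent_keys)}
```

Filtering at the boundary is what prevents cross-vendor spillover: a tool can only ever see the keys it declared, and the log records exactly that set.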
Credits & Cost Awareness
The beta shows an Estimated Credits field as a proxy for complexity and third-party usage. In orchestration mode, credits map to actual usage: tokens, renders, API calls.
- Per-step credit weights (e.g., video > email)
- Projected cost before execution
- Run logs with usage breakdowns
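Per-step credit weights and projected cost could work roughly like this; the weight values are invented purely for illustration:

```python
# Illustrative relative weights, not Cognyn's actual pricing.
CREDIT_WEIGHTS = {"email": 1, "llm_text": 2, "image": 5, "video": 12}

def project_cost(steps):
    """Estimate total credits before execution from per-step type weights."""
    return sum(CREDIT_WEIGHTS[s["type"]] * s.get("units", 1) for s in steps)
```

Running the same calculation over a completed run's actual usage (tokens, renders, API calls) yields the usage breakdown in the run log.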
Reliability & Known Limits
- Provider Variability – outputs vary by model; evaluators reduce re-runs.
- Rate Limits – connectors respect vendor quotas; planner schedules retries.
- Asset Handling – binary assets (images/video) stored as URLs with metadata.
- Human Review – optional gates for accuracy, compliance, brand voice.
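Respecting vendor quotas usually comes down to retrying with exponential backoff. A minimal sketch, where `RuntimeError` stands in for a provider's rate-limit error:

```python
import time

def call_with_backoff(call, max_attempts=4, base_delay=0.01):
    """Retry a rate-limited call, doubling the wait between attempts."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:                 # stand-in for a rate-limit response
            if attempt == max_attempts - 1:
                raise                        # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))
```

In practice the delay would also honor any `Retry-After` hint the vendor returns, rather than relying on the fixed schedule alone.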
Roadmap
- Live Orchestration – execute plans across real connectors.
- Editable Pipelines – drag steps, add gates, swap vendors.
- Team Mode – shared workspaces, roles, audit trails.
- Developer API – register tools, evaluators, and templates.
- Private Data Mode – stricter scoping and enterprise controls.
Want to build on Cognyn's architecture?
Cognyn – The Mind That Connects Minds.