
February 2026

Remote Skill Call: When Agents Start Calling Each Other

Here's something that happened last week.

I was using Claude Code to analyze some stocks. It ran a skill I'd built — fetched financial data, scored each company, generated a report. Solid. But then I thought: my friend has a better news analysis skill. He spent weeks tuning it. What if my agent could just… call his?

Not copy his skill. Not install it locally. Just call it, the way a program calls a function on another machine. My agent sends a request, his agent does the work, the result comes back. Two agents, two machines, one task.

That's Remote Skill Call.

The Operating System Analogy (And Why It Actually Holds Up)

People keep saying “agents are the new apps” or “AI is the new platform.” These comparisons are vague enough to be useless. But there's a more specific analogy that I think is genuinely illuminating: the agent system is becoming an operating system.

Not metaphorically. Structurally.

Think about what an OS actually does. It manages resources (memory, CPU, disk). It provides abstractions so programs don't have to deal with hardware directly. It runs processes. And critically, it lets those processes talk to each other — through system calls, pipes, sockets, IPC.

Now look at what's happening with agent systems like Claude Code, Codex, and Gemini CLI:

OS Concept              Agent System Equivalent
----------------------  -----------------------------------
Program                 Skill
Function                A skill's entry point
Process                 A skill execution (agent session)
File system             Workspace + knowledge files
Environment variables   Secrets (~/.claude/.env)
Package manager         Skill registry (Skillbase)
Local function call     Agent invoking a skill
Remote procedure call   ???

See the gap? Everything in this table has an equivalent — except the last row. We have skills (programs), we have skill registries (package managers), we have agents running skills (processes). But we don't have a way for one agent to call another agent's skill across the network.

Or rather, we didn't.

What Remote Skill Call Actually Looks Like

Remote Skill Call (RSC) fills that last row in the table. It's stupidly simple — on purpose. Here's what happens when you use it:

# Call someone's skill (auto-detects: public skill = pull, paid service = call)
/base-use weiduan/predict_market "Analyze NVDA and TSLA"

# Deploy your own skill for others to call
/base-sell deploy my-skill

That's it. Two commands cover 90% of usage. Behind the scenes, there are two ways your skill can run:

Platform-Hosted (Recommended)

When you run /base-sell deploy, your skill gets uploaded to Skillbase. When someone calls it, we spin up an isolated sandbox — a fresh container with its own filesystem, network isolation, and resource limits. Your skill runs there, does its thing, returns the result. You don't need to keep your laptop open. You don't need to run a listener. The platform handles everything.

This is the “serverless” model for skills. Deploy once, forget about infrastructure. The caller doesn't know or care where the skill runs — they just get the result.

Self-Hosted

If you need full control — maybe your skill accesses local files, or you want to run it on your own hardware — you can self-host. Run /base-sell host start, and your machine becomes a skill server. When a call comes in, it spins up a fresh Claude session on your machine, hands it the request, and sends back the result.

Self-hosting gives you more flexibility but requires you to stay online. Most users start with platform-hosted and only switch to self-hosted when they have a specific reason.

Multi-turn. Here's where it gets interesting. If the provider's agent needs clarification, it sends a question message back to the caller. The caller's agent (or the human behind it) replies. They go back and forth until the provider has what it needs. It's a conversation between two agents, mediated by a message bus.

Calling weiduan/predict_market...
Waiting for response...

[Question] What's your investment horizon — short-term trading or long-term?

Your reply: Long-term, 3-5 years

[Result]
Based on long-term analysis:
  NVDA: Strong Buy (score: 14/20)
  TSLA: Hold (score: 8/20)
  ...
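
The caller-side loop behind that exchange can be sketched in a few lines. Everything here is an assumption for illustration — the message shapes (`question`/`result`) and the transport API are invented, not the actual RSC wire protocol — but the shape of the loop is the point: relay questions until a result arrives.

```python
# Hypothetical caller-side sketch of RSC's multi-turn loop. The transport
# API and message shapes are invented for illustration, not the real
# message bus protocol.

class FakeTransport:
    """In-memory stand-in for the message bus, scripted for a demo."""
    def __init__(self, script):
        self.script = list(script)   # messages the "provider" will send
        self.replies = []            # clarifications we sent back

    def send_request(self, skill, prompt):
        return "call-1"              # pretend call id

    def next_message(self, call_id):
        return self.script.pop(0)    # a real bus would block here

    def send_reply(self, call_id, text):
        self.replies.append(text)


def call_remote_skill(skill, prompt, transport, ask_user):
    """Relay provider questions to the caller until a result arrives."""
    call_id = transport.send_request(skill, prompt)
    while True:
        msg = transport.next_message(call_id)
        if msg["type"] == "question":
            # Provider needs clarification: ask the human (or agent) behind us
            transport.send_reply(call_id, ask_user(msg["text"]))
        elif msg["type"] == "result":
            return msg["text"]


# Demo mirroring the transcript above: one question, then a result.
bus = FakeTransport([
    {"type": "question", "text": "What's your investment horizon?"},
    {"type": "result", "text": "NVDA: Strong Buy, TSLA: Hold"},
])
answer = call_remote_skill("weiduan/predict_market", "Analyze NVDA and TSLA",
                           bus, ask_user=lambda q: "Long-term, 3-5 years")
print(answer)  # NVDA: Strong Buy, TSLA: Hold
```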

No API keys to exchange. No webhook URLs to configure. No OAuth dance between the two agents. You authenticate once through the same Skillbase device auth flow you'd use for pushing and pulling skills, and RSC uses that identity for everything. Your username is your provider name.

Why This Matters More Than It Looks

Okay, so agents can call each other. Cute. Why should you care?

Because of what happens when you combine three things that are all independently true:

1. Skills are getting good. Not toy demos — genuinely useful, domain-specific capabilities that took real iteration to build. My stock analysis skill works because I spent weeks tuning the scoring rules and orchestration. My friend's news analysis skill works because he spent weeks curating news source rankings and entity extraction patterns. Neither of us wants to redo the other's work.

2. Skills can run without you. Deploy a skill to the platform, and it's available 24/7. Someone in Tokyo calls your stock analysis skill while you're asleep in San Francisco. The platform spins up a sandbox, runs your skill, returns the result. You wake up to credits in your account. Your expertise is working while you're not.

3. The skill format is converging. Claude Code, Codex, Gemini CLI — they're all adopting the same basic structure: a SKILL.md with instructions, scripts for tool code, reference files for knowledge. A skill built for one agent system can be called from another. The lingua franca is emerging.

Put these together and you get something new: a network of specialized agents that can compose their capabilities on the fly. Not a monolithic AI that tries to do everything, but a mesh of focused agents, each excellent at one thing, calling each other as needed.

The RPC Parallel Is Not an Accident

In the 1980s, Sun Microsystems had a problem. Programs on different machines needed to call each other's functions, but the networking code was a nightmare — different byte orders, different data formats, connection management, error handling. Every team was solving the same problem differently.

So they built RPC — Remote Procedure Call. The idea was radical in its simplicity: make calling a function on another machine look exactly like calling a local function. Hide the network. The programmer just calls get_weather("NYC") and doesn't care whether the weather service is local or remote.

RSC does the same thing, but for agents instead of programs. When your agent calls /base-use weiduan/predict_market "Analyze NVDA", it doesn't care where the skill runs. It sends the request, waits for the response, and continues its work. The network, the message bus, the session management — all hidden.
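
To make the parallel concrete, here's a hypothetical stub in the RPC tradition: the caller writes what looks like a plain function call, and the stub hides the transport. The names and the injected `transport` callable are invented so the sketch runs without a network; a real stub would serialize the request over HTTP or a message bus.

```python
# Hypothetical RPC-style stub: the caller sees a local-looking call while
# the stub hides the transport. `transport` is injected so this sketch runs
# without a network; nothing here is the real RSC client.

class SkillStub:
    def __init__(self, skill_name, transport):
        self.skill_name = skill_name
        self.transport = transport   # callable: (skill, prompt) -> result

    def __call__(self, prompt):
        # Looks local to the caller; the network lives behind `transport`.
        return self.transport(self.skill_name, prompt)


# Fake transport standing in for the platform: returns a canned response.
def fake_transport(skill, prompt):
    return f"[{skill}] analyzed: {prompt}"

predict_market = SkillStub("weiduan/predict_market", fake_transport)
print(predict_market("Analyze NVDA"))
# [weiduan/predict_market] analyzed: Analyze NVDA
```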

RPC led to CORBA, then SOAP, then REST, then gRPC. Each generation made it easier to compose distributed systems from independent services. The same trajectory is starting for agents. RSC is the first step — raw and simple, like Sun RPC. What comes after will be more sophisticated. But the core idea — agents calling agents, hiding the network — is the seed.

What's Different About Agent-to-Agent Calls

But RSC isn't just RPC with a different runtime. There are fundamental differences when both sides of the call are intelligent:

Calls can be conversational. Traditional RPC is one-shot: request in, response out. RSC supports multi-turn exchanges. The provider can ask questions, the caller can clarify. This isn't a nice-to-have — it's essential. When an intelligent agent encounters ambiguity, the right thing to do is ask, not guess. Forcing single-shot semantics on agents would make them worse.

The interface is natural language. RPC requires strict schemas — protobuf definitions, WSDL files, OpenAPI specs. RSC requires… a skill name and a description. The agents figure out the rest. “Analyze NVDA and TSLA for long-term investment” is a perfectly valid input. No schema to maintain, no versioning headaches, no breaking changes when the provider improves their skill.

The provider is autonomous. In RPC, the server is deterministic — same input, same output. In RSC, the provider is an agent that reads knowledge files, runs scripts, makes judgment calls. Two calls with the same input might get different results if the provider has updated their scoring rules between calls. This is a feature. You're not calling a function; you're consulting an expert.

The Trust Question

“Wait,” you're thinking. “You want my agent to run arbitrary requests from strangers?”

Fair concern. This is where the two hosting modes differ significantly.

Platform-Hosted: Sandbox Isolation

When your skill runs on the platform, it runs in a fully isolated sandbox — powered by E2B secure containers. Each call gets its own ephemeral environment.

The caller's input goes into the sandbox. The result comes out. Nothing else leaks in either direction. Even if someone sends a malicious request, the worst they can do is waste their own credits — the sandbox contains the blast radius.
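
The "ephemeral environment per call" idea can be sketched like this. A temp directory stands in for the isolated filesystem — real platform sandboxes (E2B containers) add network isolation and resource limits on top, and none of the names below are actual platform APIs.

```python
# Sketch of ephemeral per-call isolation. A temp directory stands in for
# the sandbox filesystem; these are illustrative names, not platform APIs.

import shutil
import tempfile
from pathlib import Path
from contextlib import contextmanager

@contextmanager
def ephemeral_sandbox():
    """Fresh workspace for one call, destroyed afterward."""
    root = Path(tempfile.mkdtemp(prefix="skill-call-"))
    try:
        yield root
    finally:
        shutil.rmtree(root)          # nothing survives the call

def run_skill_call(skill_fn, request):
    """Run one call inside its own sandbox; only the result escapes."""
    with ephemeral_sandbox() as workspace:
        return skill_fn(workspace, request)

# Demo skill: writes scratch files inside the sandbox, returns only text.
def demo_skill(workspace, request):
    (workspace / "scratch.txt").write_text(request)
    return f"analyzed: {request}"

result = run_skill_call(demo_skill, "NVDA")
print(result)   # analyzed: NVDA
```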

Self-Hosted: Your Machine, Your Rules

If you self-host, you're running on your own machine. You control the security posture. The skill only executes if it's in your explicitly deployed list — a call for a skill you haven't deployed gets rejected immediately. But you're responsible for the environment.

Most users don't need to think about this. Platform-hosted is the default, and the sandbox handles the hard security problems. Self-hosting is there for power users who need local file access or custom environments.

The Real Unlock: Skill × Scale

Here's what I didn't fully appreciate until I started using this: the economics of expertise just changed.

The old model was simple and limiting: you have a skill, you trade your time to use it. A consultant bills by the hour. A freelancer charges per project. The equation is always Skill × Time = Money. Your income is capped by your hours.

RSC breaks that equation. Now it's Skill × Calls = Money. Your skill runs while you sleep. It handles requests from people you'll never meet. The platform deals with authentication, billing, execution, security. You just deploy once and collect credits.

Think about what this means for someone who's genuinely good at something specific. A tax accountant who's spent 20 years learning edge cases. A security researcher who can spot vulnerabilities others miss. A data scientist with a finely tuned analysis pipeline. Before, their options were: work more hours, or raise prices until clients push back.

Now? Distill that expertise into a skill. Deploy it. Let it serve 1,000 requests a day instead of 3 clients. The specialist who previously traded time for money can now multiply their impact 100x — without working 100x more hours.

This is the “sell me this skill” moment. Not sell me your time. Sell me the crystallized output of everything you've learned. And do it at scale.

Where This Goes

If you squint, you can see the trajectory.

Today: Developers deploy skills to the platform. “Hey, I built a great code review skill. Try calling it.” Word of mouth, manual discovery, but execution is instant — no need to coordinate uptime or infrastructure.

Soon: Skill discovery becomes automated. Your agent knows which remote skills exist and calls them when needed, without you explicitly saying /base-use. The skill registry becomes a service directory. Pricing mechanisms mature — some skills are free, some charge per session, the credits flow automatically.

Eventually: Agent workflows span multiple providers transparently. You say “analyze these stocks and translate the report to Japanese,” and your agent orchestrates calls to a market analysis skill, a translation skill, and a formatting skill — each run by a different person, on different machines, each bringing their own tuned knowledge. You don't know or care about the topology. It just works.

This is the microservices moment for agents. Not the buzzword-laden enterprise version — the original insight: small, focused, independently deployable units that compose through simple interfaces. Except this time, the units understand natural language, can negotiate with each other, and improve through use.

Try It

RSC is available now as a Skillbase skill. In Claude Code:

# Install (or just tell your agent: "please read skillbase.work/setup")
# Fill your own agent skill path, e.g. ~/.claude/skills or ~/.codex/skills
curl -fsSL https://skillbase.work/api/scripts/install.sh | bash -s -- "<agent-skill-path>"

# Call someone's skill
/base-use weiduan/hello-world "Say hi"

# Deploy your own (platform-hosted, runs in sandbox)
/base-sell deploy my-skill

Authentication is handled automatically — the first time you run a command that requires identity, it opens your browser, you sign in, done. Same credentials you use for pushing and pulling skills. No separate registration step.

The core loop is simple: /base-sell deploy to go live, /base-use to call. Platform-hosted runs in isolated sandboxes with full security. Self-hosted gives you control when you need it. Either way — one agent calling another, across the network, with multi-turn conversation support.

The last row in the table has an answer now.