Agent, Agent Capability, and agent_in #637
Replies: 9 comments 22 replies
-
I definitely agree @CarterBeauBenson, and I want to tag @avsculley here. I think we should think through the full design pattern here in order to clarify how this term fits into the picture. Here are some initial thoughts and one potential start.

Thinking about what you've written, Carter, we now seem to have two ways of thinking about agency in CCO: a first that is weaker and a second that is stronger. These two ways may not be opposed but complementary. Agent Capability seems to be about those dispositions that allow their bearers to form and 'hold' Directive ICEs. Subclasses of this might be 'Planning Capability', 'Task Allocation Capability', or even 'Tool Use Capability' (I have been working on an experimental ontology of such capabilities here). In addition, it may be useful to have the class 'Agency' to capture (1) (since Agent Capability is tied to (2) already). Agency might be a Realizable Entity that is realized in processes where its bearer is an agent in that process, and agent_in then remains as is. To align with this view, we could modify the definition of Act so that it becomes "a Process that has_agent some Material Entity", and a Planned Act becomes "an Act that realizes some Agent Capability". We would then need to better define 'Agent Capability' in terms of how Material Entities hold Directive ICEs. Thoughts?
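For concreteness, the proposed redefinitions could be sketched in OWL along these lines. The prefix and IRIs below are placeholders for illustration, not actual CCO terms:

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix :    <http://example.org/agency-sketch#> .

# Proposed: Act = a Process that has_agent some Material Entity.
:Act owl:equivalentClass [
    a owl:Class ;
    owl:intersectionOf ( :Process
                         [ a owl:Restriction ;
                           owl:onProperty :has_agent ;
                           owl:someValuesFrom :MaterialEntity ] )
] .

# Proposed: Planned Act = an Act that realizes some Agent Capability.
:PlannedAct owl:equivalentClass [
    a owl:Class ;
    owl:intersectionOf ( :Act
                         [ a owl:Restriction ;
                           owl:onProperty :realizes ;
                           owl:someValuesFrom :AgentCapability ] )
] .
```

Stated this way, a Planned Act is by definition an Act, and hence has an agent; whether that agent must also be the bearer of the realized Agent Capability is the further question raised below.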
-
Agent is a Material Entity that is agent_in some planned process that realizes some skill or ability? This might make the bar for agency too high: either AI is not agential, or we anthropomorphize AI. First-degree murder seems to be a planned act. Pinging @johnbeve, @mark-jensen, @neilotte, @APCox
-
@CarterBeauBenson, are those definitions from a legal source? I find the definition of Second-degree murder puzzling: if you kill someone, premeditated or not, there certainly seems to be intent to cause serious bodily harm. As for planning, I suppose the pertinent question is: how far in advance do you have to plan? I'm not talking about the legal definition, but rather about whether it's useful, from a modeling standpoint, to create a Directive Information Content Entity that prescribes the Act, thereby making it a Planned Act. Once upon a time, an Information Content Entity was supposed to generically depend on an Information Bearing Entity. I submit that, for many acts of first-degree murder, the murderer is unlikely to express their thoughts in any way that could be considered an Independent Continuant.
-
@CarterBeauBenson that may make 'Agent' a high bar. If we go along with a stronger and a weaker sense of agency, we might have an Agent simply be an agent_in some process, while a subclass of Agent might be an Intentional Agent that is an agent_in a Planned (or, as we decide, Intentional) process?
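That weak/strong split could be sketched as a subclass pattern, e.g. (placeholder IRIs, not CCO terms):

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix :     <http://example.org/agency-sketch#> .

# Weak sense: an Agent is anything that is agent_in some process.
:Agent owl:equivalentClass [
    a owl:Restriction ;
    owl:onProperty :agent_in ;
    owl:someValuesFrom :Process
] .

# Strong sense: an Intentional Agent is agent_in some Planned Act.
:IntentionalAgent rdfs:subClassOf :Agent ;
    owl:equivalentClass [
        a owl:Restriction ;
        owl:onProperty :agent_in ;
        owl:someValuesFrom :PlannedAct
    ] .
```

If Planned Act is declared a subclass of Process, the second equivalence already entails that every Intentional Agent is an Agent, so the explicit rdfs:subClassOf is redundant but documents the intent.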
-
@CarterBeauBenson I'm going to let others better versed in BFO foundations than I am decide whether a brain can be an IBE. I will admit that, if a brain is an IBE, it affirmatively answers a question I once posed: is there an IBE that's not an IBA?
-
It seems to me that intended acts need not be realized. I intend to go to the grocery store but have an accident on the way, for example.
-
These are great questions, Carter--thanks for raising them. I regret that I didn't see them when they were first posted. A few thoughts occasioned by some of your questions.

Re 1: What of value in the present definition would be lost if it were replaced by something like this? (Def) A Realizable Entity that is realized by a Planned Act performed by this Realizable Entity's bearer. I think there's no circularity, even apparent, in (Def). Admittedly, "performed" is left unanalyzed in (Def). Definitions have to bottom out somewhere, though, and the present Agent Capability definition is clearly intended to imply, though as you point out it doesn't seem to actually imply, that a Material Entity must perform a Planned Act in order to count as realizing an Agent Capability that inheres in it.

Re 5: Insisting explicitly, as in (Def), that the Planned Acts that realize an Agent Capability must be performed by the Agent Capability's bearer would rule out cases like the one you mention.

Re 6: In another discussion thread I noted that nearly every Realizable Entity definition in CCO defines the entity in question EITHER as an RE that inheres in its bearer in virtue of xyz OR as an RE that is realized by processes of such-and-such a sort. The current Agent Capability definition is a hybrid of these two definition types: it defines an Agent Capability as one that inheres in its bearer in virtue of a certain fact, but this fact is about (among other things) what its realizations are. It's tempting to think that "in virtue of" picks out a "because" or sourcehood relation, but on reflection that might not always be what's intended in these definitions. The present definition, for example, is as follows:

Agent Capability = A Realizable Entity that inheres in a Material Entity in virtue of that Material Entity's capacity to realize it in some Planned Act.

If you read "in virtue of" as "because," then you get:

Agent Capability = A Realizable Entity that inheres in a Material Entity because this Material Entity is capable of realizing this Realizable Entity in some Planned Act.

But I doubt any Realizable Entity inheres in an entity because this entity can realize this RE. Compare: fragility1 inheres in glass1 because glass1 can realize fragility1. That sounds really bad to me. Not exactly circular--just getting the order of explanation wrong. If glass1 can realize fragility1, then this is surely partly because fragility1 inheres in glass1. Your suggestion that "in virtue of" is really meant to pick out something like the realization conditions for the RE in question is intriguing. I'd like to sample other RE definitions that employ the same language to see how often it might be used that way.
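One implementation note on (Def): the requirement that the realizing Planned Act be performed by this very capability's bearer is a co-reference that plain OWL class expressions cannot state. A property chain gets close, though it licenses an inference rather than enforcing a constraint (placeholder IRIs, not CCO terms):

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix :    <http://example.org/agency-sketch#> .

# If capability c is realized_in act p, and p has_agent b,
# then infer that c inheres_in b.  This derives the bearer from
# the performance; it does not reject acts performed by others.
:inheres_in owl:propertyChainAxiom ( :realized_in :has_agent ) .
```

Actually enforcing "only the bearer may perform the realizing act" would need a rule language or a constraint language such as SHACL rather than an OWL axiom.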
-
Causality seems to me to be impossible to define clearly. What participant in a process is not causally involved? No participant can be ignored: without any one of them, there is not that process. The best we have in BFO, IMO, are realizable entities. A participant is causally involved because they are realizing a realizable entity. agent_in tells me (maybe) who to blame, but even that is dubious, given the uncertainty about how far back in the causal chain to look.

Intention and planning seem like a much stronger basis to rest definitions on. Maybe intention less so: intention seems like a kind of attitude towards a plan concretized as a realizable entity. I think we have a pretty good intuitive sense of what planning is. For example, there is a recipe as written down, and a recipe as remembered, so it can be copied from memory. But there is also, potentially, a manifestation of the recipe that is actually doing what it says. What we have that lets us do what the recipe says is a realizable; it's something that is manifested in a correlated process. My intending to make the recipe adds a choice that I will try to realize the realizable entity. Maybe the realizable entity changes type to "strengthen" it.

As for agent_in: I would have it go away in favor of saying instead what realizable is realized by what, and work on classifying realizables. Maybe it will then become apparent when bearers of realizables are agents, but I have the sense that that judgement will be socially defined. I'm not sure that judgements of agency will be the same across cultures. So better from the bottom up.
-
Does the play realize the acting role, or does the act of acting? Assume it's the act of acting. I see that "realizes" uses "participates" in its definition, so we might get the second triple for free, but I am not seeing anything in the axioms that would let us infer that. Could there be an abbreviated pattern that serves as a shortcut for the above? It is sometimes unclear what roles or dispositions are being realized by a process, but it might still be useful to capture the performer.
-
If I remember right, it used to be that CCO:Agent was a defined class such that any Material Object that has the agent_in object property associated with it could also be counted as an agent. For example:
person123 agent_in plannedAct123
meant that person123 is an agent. To me, this meant that person123 had causative power (from the agent_in property) in what would often be planned acts. Since the person had causative power in a planned act, it is likely that they could, or had the ability to, carry out planned acts. I admit that work was needed here to specify that a rock could not be an agent in the breaking of a Tesla windshield, even though in some sense it causes the windshield to break; the problem is that this implication was not explicit. The nice thing, however, was that we did not have to also assert that person123 is an agent.
Now Agents are defined as "a Material Object that is the bearer of some Agent Capability." This means that in order to get the same sort of inference as the above, I need to specify that
person123 is_bearer_of agentCapability123
Although there is nothing wrong here, it does mean that one has to add an Agent Capability and no longer gets the free typing from using "agent_in".
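The contrast between the two patterns can be made concrete. Under the old defined-class reading, a single agent_in assertion was enough for a reasoner to type the individual (placeholder IRIs; the recalled definition is paraphrased, not quoted from CCO):

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix :    <http://example.org/agency-sketch#> .

# Recalled pattern: Agent as a defined class over agent_in.
:Agent owl:equivalentClass [
    a owl:Class ;
    owl:intersectionOf ( :MaterialObject
                         [ a owl:Restriction ;
                           owl:onProperty :agent_in ;
                           owl:someValuesFrom :PlannedAct ] )
] .

:person123 a :MaterialObject ;
    :agent_in :plannedAct123 .
:plannedAct123 a :PlannedAct .

# A DL reasoner infers:  :person123 a :Agent  -- no extra assertion.

# Current pattern: the typing must come from an explicit capability.
:person123 :is_bearer_of :agentCapability123 .
:agentCapability123 a :AgentCapability .
```

In the first pattern the agency typing falls out of the process data already being recorded; in the second it requires minting and asserting a capability individual alongside the act.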
Additionally, "agent_in" does not have a domain restriction limiting the sorts of things that can be agents any more than the sorts of things that can participate. For example, Sites can participate, therefore Sites can be the agent_in an action; ICEs can participate, so ICEs can be agent_in. This seems to violate the spirit of agency in CCO, but it does not violate any laws, since, to my eyes, there is no semantic connection between Agent and agent_in.
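One way to close that gap would be to tie agent_in to Agent directly. Note, though, that in OWL a domain axiom licenses an inference rather than flagging a violation: a Site asserted as agent_in would simply be inferred to be an Agent, and only become inconsistent if the classes are declared disjoint (placeholder IRIs, not CCO terms):

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix :     <http://example.org/agency-sketch#> .

:agent_in a owl:ObjectProperty ;
    rdfs:subPropertyOf :participates_in ;
    rdfs:domain :Agent ;   # every agent_in subject is inferred an Agent
    rdfs:range  :Process .

# To make misuse detectable rather than silently inferred:
:Site owl:disjointWith :Agent .
```

A closed-world check (e.g. SHACL) would be the alternative if one wants misuse reported as a validation error rather than a logical inconsistency.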
Below is the definition of Agent Capability:
Agent Capability = A Realizable Entity that inheres in a Material Entity in virtue of that Material Entity's capacity to realize it in some Planned Act.
Several things should be noted about the quality of this definition. Substituting "an Agent Capability" for the anaphoric "it" makes the circularity explicit:
Agent Capability = A Realizable Entity that inheres in a Material Entity in virtue of that Material Entity's capacity to realize an Agent Capability in some Planned Act.
On my view, it should be implied that a Material Object has an agent capability if it has realized that capability through the proper use of the object property. Basically, I think causal power in a planned act that has occurred should be in the foreground, and the potential to have causal power in the background. Why is this piece of software an agent? Because it performed an intentional act. Why is this person an agent? Because they performed an intentional act. Why is the Tesla rock not an agent? Because it did not perform an intentional act. We then infer that the person and the software have the capability (or some other thought-out class) and that the rock does not.
I would like to see three things if possible.