Replies: 33 comments
-
@Mttbnchtt Thanks for catching the typo.
Not sure I completely understand your concern. I agree it doesn't prescribe the specifics of the inputs and outputs. But, at a minimum, doesn't an algorithm generically prescribe the number, order, and formatting of inputs and outputs?
-
@mark-jensen I think I agree with @Mttbnchtt. Here's a dictionary definition of algorithm (from the OED):
By these lights, CCO's definition seems awfully specific. What is the motivation for referencing mathematical functions? In practice, only a small fraction of algorithms derive from mathematical functions. In particular, mathematical functions are typically considered stateless, whereas algorithms can modify state. One can infer an algorithm's inputs and outputs from its set of operations, but to my mind the operations are the fundamental aspect of the concept. CCO, with its "as well as workflow" description, demotes that aspect.
-
Hi @mark-jensen. I still think that an algorithm does not prescribe inputs or outputs. The algorithm is a finite list of rules. These rules will typically be applicable to some inputs and not to others, but that is not stated in the algorithm. The number of inputs that a function takes is also not necessarily mentioned in an algorithm. We could have a function f(x, y) that takes two inputs, while the algorithm that describes how to compute the result may simply be: copy x to the output slot and halt (there is no mention of the second input). An algorithmically specified function may not have an output (for example, the algorithm may cause the computing agent to loop forever). It is important to notice that an algorithm is not just any list of rules. The list must be finite, the rules (individually, but not necessarily collectively) must be finitely implementable, and the process must be deterministic. An additional condition that I forgot before (but it is usually required) is that the list must be consistent, i.e. the algorithm cannot ask the agent to do different things in the same situation. Finally, this is not required to solve our issue, but it is better to say it explicitly: in classical mathematics, not every function is algorithmically specifiable (there are so-called constructive views of mathematics in which every function is algorithmically specifiable, but they are not mainstream math).
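A rough Python sketch of these two examples (illustration only; the names here are not CCO terms):

```python
# The function is declared with two inputs, but the algorithm's instructions
# never mention the second one: "copy x to the output slot and halt".
def f(x, y):
    return x

# A legitimate algorithm whose prescribed process never halts, so no output
# ever exists.
def count_forever():
    n = 0
    while True:
        n = n + 1
```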
-
Thanks @Mttbnchtt @swartik for your feedback. I agree that the use of 'mathematical function' seems too restrictive. Let's work here to improve the definition. Note: the CCO term in its current state derives from IAO. A search of their tracker finds several stale issues working to refine their definition, with no results. Moreover, there is a clear history of struggle amongst experts in computational theory and philosophy to reach agreement on a universal definition for 'algorithm'. Thus, we should be modest in our goal and aim for stepwise improvement. First draft at a revision:
Please offer suggestions for improvement.
-
Hi @mark-jensen. I think the draft is good. I have three observations:
I am not familiar with the selected vocabulary of the CCOs. Is there a way to restate the following definitions within the CCOs:
-
@Mttbnchtt, CCO defines "Directive Information Content Entity" as the superclass of Algorithm and Objective, among other classes. Its definition is:
@mark-jensen, I'm trying to think of how to rewrite the definition to use as many CCO terms as possible. Unfortunately, CCO doesn't include either "instruction" or "operation". In the sense you intend, they are themselves Directive Information Content Entities. I think "deterministic" is too strict. "Sort the array" is a legitimate step in an algorithm, but it doesn't tell me whether to use a stable sorting algorithm. So with those concerns in mind, I propose the following revision:
This should be accompanied by an elucidation explaining what "finite sequence of operations" means. If you agree, I'll be happy to work on the elucidation. (For the record, I'm still pondering whether I prefer "operation" or "instruction".)
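A small Python sketch of the "sort the array" point (illustration only, not CCO vocabulary): both procedures below satisfy the step "sort the records by their first element", but only the built-in sort is stable.

```python
records = [("b", 1), ("b", 2), ("a", 1)]

# Python's built-in sort (Timsort) is stable: records with equal keys keep
# their original relative order.
stable = sorted(records, key=lambda r: r[0])
# -> [("a", 1), ("b", 1), ("b", 2)]

# A plain selection sort also "sorts the array", but its swaps can reorder
# records with equal keys, so it is not stable.
def selection_sort(items, key):
    items = list(items)
    for i in range(len(items)):
        j = min(range(i, len(items)), key=lambda k: key(items[k]))
        items[i], items[j] = items[j], items[i]
    return items

unstable = selection_sort(records, key=lambda r: r[0])
# -> [("a", 1), ("b", 2), ("b", 1)]  (the two "b" records have swapped)
```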
-
Thanks @swartik for the clarification. About the definition of algorithm, there are two major problems with the proposed revision: "A Directive Information Content Entity that prescribes a finite sequence of operations that can be automated to achieve an Objective".
-
I think I agree with you about "operation" vs. "instruction". However, @mark-jensen proposed "operation", and I would like him to weigh in so I can understand his motivation before I commit. As for whether repeatedly adding 1 doesn't achieve an objective, I'm not so sure. You are right that it has no output. I could therefore say that your algorithm achieves the objective of demonstrating that an algorithm need not have an output. You as the algorithm's creator must have had an objective in mind when you stated the algorithm. That objective has nothing to do with the instructions' effects. Alternatively, CCO might argue, and quite reasonably I think, that "algorithm" in its context is limited to instructions combined to achieve an objective. Of course, that would have to be stated clearly and explicitly.
-
Thanks @swartik. I see that it could be reasonable here to concentrate on algorithms given to produce an objective. As for operation vs. instruction, I agree that we should hear from @mark-jensen first.
-
@swartik @Mttbnchtt Thanks for the thoughtful feedback. Apologies for the delay in response. A Directive ICE is (in a sense) an instruction, or is composed of instructions or commands. What it prescribes are operations, i.e., processes. We can drop the use of 'operations' to make it clearer. The question is: do we want to restrict algorithms to prescribing only computer processes? I think so. Iterating:
And yes, having an objective doesn't necessarily imply an output or termination.
-
@mark-jensen Your definition definitely improves consistency with CCO. Is Computer Program Execution a proposed class? I can't find it in either the master branch or bfo2020.
-
Thanks Mark. Just seen this. I will reread the discussion. Will reply later today.
-
Sorry for replying late. I think that Mark's definition of algorithm is ok now. To be sure, I should know the definition of Computer Program Execution, though. If it implies that the execution is bounded by the energy of the universe or the life of the universe, which may be finite, then I disagree. For example, given any finite amount of time, there are Turing-computable functions that require longer than that to compute. Or, even more simply, a non-terminating algorithm will run forever, but it is still an algorithm.
-
@swartik My mistake. ComputerProgramExecution isn't in Mid-level CCO. I am tempted to amend the def. to be more general, to not restrict it only to operations performed by a CPU, but to include the potential for other types of processes being prescribed by an algorithm. cco:Algorithm =def "A Directive Information Content Entity that prescribes some Process and contains a finite sequence of unambiguous instructions in order to achieve some Objective."
@Mttbnchtt I think generalizing to a BFO process would also address your concern of an algorithm prescribing a non-terminating process.
-
Agree. Thanks Mark.
-
@alanruttenberg This definition was an early iteration in attempting to improve the original. See comments from last week; currently it is:
We removed the use of 'deterministic' for the reasons you describe. The use of 'finite' and 'unambiguous' applies to the instructions themselves, not the particular implementation or process that carries them out. Admittedly, the use of "in order to achieve some Objective" is redundant and not really needed, as all Directives prescribe in order to achieve an objective. You suggest that the entity that is prescribed is:
Are you suggesting we use 'computation' as a primitive and perhaps only elucidate it? Or rather, attempt to define something akin to a … ? I'm amenable to including something about computation and automation, but I think offering a formal def. for computation might lead us down a rabbit hole. Iterating:
Note we have a term in CCO-mid
I think it is best for our definition of …
-
This comment is criticism. Later I'll write something constructive.
Quick comments:
Are all those intended? For the most part, they don't seem to me to fit into the common sense of mechanical. ComputerProgramExecution
I'm not sure I agree with the goal "Ideally it would also not rule out other types of computation that organic or quantum systems exhibit". DNA is not an information content entity. It is a different kind of GDC. ICEs are artifactual. In the case of organic or natural quantum systems 'computing', I'm of the opinion that such characterizations are concepts - ways to understand natural phenomena in terms of systems we've built and understand. That doesn't mean they are the same sort of things as those systems. Quantum computers do computations. Natural quantum processes (pretty much anything that happens) do not all fall under computation and do not execute algorithms, which are human-constructed things. Non-intentionally built things do not prescribe. I concur on the goal of being humble in our attempts. To me that means sticking to a narrower definition of a term we understand now, and then building generalizations when and if we understand more. If it's thought that there's some larger sense of algorithm, then define 'computer algorithm' for now and reserve 'algorithm' for when we're smart enough to figure out the generalization.
-
I will try to give a comment that hopefully contributes to converging on a solution. There are at least two senses of "algorithm". In one sense, we may accept that some implementational minutiae are not specified (as Alan said). In another sense, given an algorithm and an input, the complexity of the procedure is determined; therefore, all implementational minutiae are specified as well. It is not the task of an ontology to capture all the senses in which we use a word (otherwise BFO could not take objects to always be material entities). Let us pick one sense of algorithm. I suggest choosing the one most studied in mathematics, i.e. the second one. Given that, Mark's last attempt seems almost ok to me (also, we accept that we are not writing the definitive definition of algorithm, since scholars keep discussing it, as one can see by reading the entries on computation in the Stanford Encyclopedia of Philosophy). My main worry is that, if indeed realist ontologies bound mechanical processes by the existence span or energy (or similar) of the universe, then it is wrong to say that a computation "can be carried out in some mechanical process", because every mechanical process is going to be finitely bounded. This would make the decimal expansion of the number \pi not computable, despite agreement among mathematicians that it is computable (see e.g. Alan Turing, On Computable Numbers, 1936). If we can tinker with the definition so that it does not imply that the implementation of an algorithm be limited by the existence span or energy (or similar features) of the universe, then I think we are done.
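To make the resource point concrete, here is a small Python sketch (my own illustration, based on Machin's formula, not anything proposed in this thread): a finite, unambiguous list of instructions that yields as many digits of \pi as requested. Any particular run uses only finite resources, but the algorithm itself fixes no upper bound on them.

```python
def arctan_inv(x, scale):
    """Integer approximation of arctan(1/x) * scale via the Gregory series."""
    term = scale // x
    total, n, sign = 0, 1, 1
    while term:
        total += sign * (term // n)
        term //= x * x
        n += 2
        sign = -sign
    return total

def pi_digits(num_digits):
    """First num_digits decimal digits of pi, e.g. pi_digits(6) == '314159'."""
    scale = 10 ** (num_digits + 10)          # ten guard digits
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    pi_scaled = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    return str(pi_scaled)[:num_digits]
```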
-
This doesn't follow. The minutiae may vary in the way I mentioned in the earlier message. The only constraint from computational complexity is that the complexity of the implementation of a step is the same as (or better than) what it is in the algorithm.

As I understand the Turing paper, the distinction being made is between computable and non-computable numbers, and is related to the notion of decidability. The mentions of infinity should be understood as in-principle mentions and stand in contrast to the uncomputable numbers, a great example of which is Chaitin's constant. That number isn't even in principle computable. Turing can't be used as an authoritative voice on whether the universe can accommodate the infinite, as even now that is not known. When mathematicians talk about the infinite it is not (usually) in the physical sense, and access to it is not given by physics. Rather, it is arrived at by things like limits and induction. As an aside, for an entertaining take on finiteness and numbers, check out Greg Egan's story Dark Integers.

Inclusion of the finiteness criterion doesn't hurt (much), but it doesn't help either because, as far as I know, there's no sensible alternative to algorithm that differs only on the finiteness criterion. So it doesn't make a difference and isn't suitable as a differentia. There's an implicature that conditions mentioned in a definition are meaningful. When they are not, including them is a tax on understanding.

My earlier comment raises a number of issues with the definition, only one of which is the use of "finite". I still don't really understand the term mechanical process, and I list a number of processes that I'm asking about in order to understand it. Hopefully we can get back to addressing those in the interest of coming up with a suitable definition.

You are right that we aren't writing the authoritative definition. To me that means we should create a definition that's narrowly scoped, has an operational way of evaluating whether a given instance of information content entity is an algorithm, and where the meaning of common use of the label doesn't excessively differ from the definition. To be clear, this isn't a personal criticism of the initial efforts to define the term nor an attempt to be overly philosophical. I look at all definitions in the same way: as a practical way of determining whether an instance is in or outside the class. I test that by thinking of examples that are close to what I think the boundaries of the definition are and then checking with the author as to what they intended. If a definition doesn't let me do that, or if parts of the definition are not germane to that task, then I aim to get it fixed. Definitions can't always do all the work. That's why it's a good practice, in such cases, to give examples of usage as well as cases that seem close to the edge but are on the wrong side.
-
Alan, in the informal sense, an algorithm does not specify every detail. In the formal sense studied by computability and complexity theory, it does. Saying "add n to m" is not an instruction in a formally specified algorithm, even if in more relaxed contexts it could count as such. In a formal setting, "add n to m" would be said to refer to a function (addition) that can be specified by different algorithms (each algorithm giving instructions that leave no room for alternative moves, once the inputs are given). If not, the complexity of a procedure would indeed not be determined by the algorithm, given the inputs. That said, if we want to define the informal notion of algorithm, that is ok with me as long as we clarify the objective.

For the second point: I mentioned the computability of the number \pi to indicate that we cannot limit computations to what machines can do in the actual universe. The reason is that we cannot provide a fixed finite upper bound to the resources (time, space, energy, etc.) that a computation may take. If computations were limited by the existence span of the universe, then \pi would be uncomputable, but it is computable. I raised this point because some comment above made me worry that, given that our ontology is a realist one, mentioning mechanical processes may imply a fixed finite upper bound to the resources available to the mechanical process. If that is not the case, we can ignore this observation.

There still seems to be some confusion about the role of finiteness in an algorithm. It is essential for each instruction to be finite and for the list of instructions to be finite. This is a necessary condition, not an idle one: otherwise, every real number would be computable. Our definition should either say that explicitly or imply it (implicature is not good enough when capturing a formal notion). Moreover, every computation that provides an output uses only a finite amount of resources. However, there is no fixed finite upper bound to the size of the input, the size of the output, the length of an instruction, the length of the list, or the resources used by the computation. That is all we need to consider regarding the issue of finite vs. infinite in this context. The "in principle" part of the definition is meant to indicate the absence of a fixed finite upper bound of this type.

Concerning Alan's issue with mechanical processes, I suppose that Mark is better informed than me. I will just observe that "mechanical" in computability is a term of art meaning, more or less, "in principle can be carried out by a machine" or "in principle can be carried out by a human with no insight or freedom or tools, except basic ones such as paper and pencil". It is not the sense of "mechanical" that expressions such as "Newtonian mechanics" evoke.
-
Matteo, "add m to n" is the COBOL syntax for incrementing |
-
@swartik: I was taking "add n to m" to be an English sentence. If in COBOL this string of symbols constrains each move of every implementer of it, then it can be used in an algorithm (in the formal sense of algorithm), as you say. In English that string of symbols does not have an unambiguous interpretation (my Italian teacher taught me a method that, albeit slightly, differs from what my son's American teacher is teaching him; of course both methods lead to the same result and define the same function).
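A tiny Python sketch of that distinction (illustration only): "add n to m" names a function; the two procedures below are different algorithms for it, with different step-by-step behaviour and complexity, yet they define the same function.

```python
def add_direct(m, n):
    # One arithmetic step.
    return m + n

def add_by_counting(m, n):
    # Increment m one unit at a time (assumes n is a non-negative integer):
    # the same function as add_direct, computed by a different, slower algorithm.
    for _ in range(n):
        m = m + 1
    return m
```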
-
While there are, theoretically, algorithms for the individual steps, a given algorithm (formal or informal) typically does not mention them. I'm not sure why my assertion is controversial. Implementation happens at multiple levels in computer systems. There's the level of the computer program, the steps that it is compiled to, the particular implementation of a CPU. Instructions may be directly implemented in hardware, may be specified by microcode, may be coalesced, such as when a CPU does a multiply-and-add instruction instead of the two-step add and multiply, may be reordered, and an instruction may even be implemented in different ways depending on the state of the machine. I think we've beaten this to death. If we can't agree on it at this point, then we will have to agree to disagree. There are a host of other issues I've raised regarding the definition and that of mechanical process, on which it is based. In the interest of moving forward, perhaps we can turn to those other issues.
-
@alanruttenberg: a computer program like a Python script is not an algorithm in the sense in which computability and complexity theory consider it. It is an algorithm only in an informal sense (assuming we call formal the mathematically exact sense of algorithm studied in computability and complexity theory). The reason is what you say: a Python script does not specify every detail of the behavior of a computer. If we want to define an algorithm as a computer script (where a computer is, for example, what Apple produces), I am ok with that. However, as I tried to explain, I am considering the mathematical definition of an algorithm (where a computer, too, is a theoretical concept; see, e.g., the infinite tape of Turing machines). I think we have to agree on what we discuss, not so much on the fact that computer programs are not algorithms in the formal sense. My insistence on non-ambiguity is standard in computability theory, not my choice (see Rogers, Theory of Recursive Functions, §1.1, and Soare, Turing Computability, §1.2). You need it to prove, e.g., Goedel's first incompleteness theorem. I am not competent to discuss your questions about mechanical process as defined in this ontology. I defer to you and Mark, except to note, as above, that "mechanical" in computability theory is not what "mechanical" means in physics.
-
I thought of a counterexample to all the definitions offered: algorithms that employ an oracle. In determining the complexity of such algorithms, the oracle can give a one-step answer to a problem that is unsolvable, i.e. where no implementation is possible. In those cases there are no minutiae to be had, nor can a program be written that accomplishes the goal of the algorithm. So, it's back to the drawing board. One approach would be to avoid the problem by using a narrower definition that excludes such algorithms. Having a definition that captures all the senses in which a term is used in language is desirable but sometimes not advisable. In the end it's the definition that matters. The label and documentation should be written for the purpose of best communicating the class. The other approach is to try to make a definition that works for the oracle case. I'm thinking that a principal difference between an algorithm and a program is the intent in creating it. Algorithms can be used both for determining complexity and as a guide in writing programs. Programs, for the most part, are written for the purpose of controlling what a computer does. There may be other goals, such as writing a program as an entry to a competition where the winner is whoever writes the shortest program that works.
-
Thanks, Alan. I think it is a good idea to define algorithm as one does in computability theory and to define programs to capture what computer scientists write (also called algorithms in a less formal sense). I do not think that oracle computation changes the definition of algorithm, though, which, in the formal sense, is still a finite list etc. When one introduces oracle computation in computability theory, all that changes is that instructions of a new type (more or less "ask the oracle") are allowed. There is no change concerning the unambiguity of each instruction. What we should consider is that, in the sense of algorithm used in computability, given the inputs and the oracle, the complexity of the computation is given.
-
@Mttbnchtt, Mark's definition proposal was:
An algorithm using an oracle that answers, say, whether some program halts can't prescribe the computations to be carried out, because there is no process that can compute whether a program halts.
-
I'm leaning toward the idea of, instead of defining algorithm, defining something like "realizable computing algorithm", with a definition that is specific to computer science/programming and is limited to algorithms that can actually be implemented. Or, it could be slightly more restrictive by not including any algorithms using oracles. A recipe is not a realizable computer algorithm in this sense, nor are instructions in a repair manual, though some people might call those algorithms. We could have "algorithm" as an alternative label, if desired.
-
Alan, an oracle is not part of an algorithm. It is information stored somewhere. In a Turing machine, it is on a tape that the head can read. The tape could contain the decimal expansion of an incomputable number, but that is not relevant. We do not require the information on the oracle tape to be generated by a computation or algorithmically: it is given (like an oracle delivering a message from a god, hence the name). The algorithm is something separate from the oracle. The algorithm of a machine with an oracle would still prescribe a computation. The computation could include actions like retrieving information from the oracle. Nothing changes in the (formal) definition of what an algorithm is by allowing actions of this type to be prescribed. I do not oppose suggestions, like yours, to refocus the effort. I will let the CCO owners decide what they want the ontology to contain.
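A toy Python sketch of that separation (illustration only; halting_oracle is hypothetical and of course not implementable): the algorithm is an ordinary finite, unambiguous list of instructions, and the oracle is supplied from outside as a black box that the instructions may query.

```python
def complement_decider(halting_oracle):
    """Build a decision procedure relative to the given oracle.

    halting_oracle(program, x) is assumed to answer, in a single step, whether
    program halts on input x; nothing requires those answers to be computable,
    and the oracle itself is not part of the algorithm below."""
    def decide(program, x):
        answer = halting_oracle(program, x)   # query the oracle: one step
        return not answer                     # output the complement and halt
    return decide
```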
-
There is a concrete, incremental improvement in this thread that I have tried to capture under #411. The rest of this thread I will move into the discussion forum for further refinement of this issue.
-
Currently the definition of algorithm is: "A Directive Information Content Entity that prescribes the inputs and output of mathematical functions as well as workflow of execution for achieving an predefined objective." It contains a typo: an predefined --> a predefined.
Anyway, the biggest issue is that, I think, the definition is wrong, at least the part about mathematical functions. An algorithm does not prescribe inputs or outputs (in some sense of "prescribe"; that is the job of the definition of a function). An algorithm is a finite list of commands 1) each of which requires only a finite amount of information and resources (energy, time, paper, etc.) to be carried out, and such that 2) the entire process of implementing the algorithm on a given input is deterministic (i.e. every agent that follows the same algorithm and begins with the same input carries out the same operations, producing, along the way, the same results in the same order). Note that an algorithmically specified procedure may not terminate, i.e. the output may not exist. For a reference, you can see Soare, Turing Computability, pp. 3 ff.