PyEDA API summary #18

AurumTheEnd opened this issue Apr 12, 2024 · 0 comments

PyEDA supports the following API:

  • Expression in expr inherits from Function in boolfunc.

[Trait] Variable:

  • Every variable has a list of names and indices. These are "hierarchical", like a namespace declaration. Not sure if we want this, but it seems like it could be a good compromise compared to just using strings?
  • Also, every variable has a unique ID on top of this that actually identifies it across data structures.
  • Variables are ordered implicitly, first by index, then by name (not uniqueid apparently?).
  • var.name and var.qualname | We certainly want names... we might skip qualnames. Up for debate.
  • Different representations have different "variable" objects.
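
For reference, a rough (untested) sketch of how the variable structure looks on the PyEDA side, based on the notes above; exact property names are from memory:

```python
from pyeda.inter import exprvar

a = exprvar('a')          # plain variable
x0 = exprvar('x', 0)      # indexed variable, rendered as x[0]

# The same (name, index) pair yields the same object from the global context.
assert exprvar('x', 0) is x0

# Every variable also carries a unique integer id that identifies it across
# data structures (ordering, per the notes above, uses index/name instead).
print(x0.uniqid)
```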

[Trait] Function:

  • fun.support

    • (Unordered) Set of variables. This is probably just syntactic.
    • Additionally, we might want to be able to compute "essential inputs" (e.g. fun.essential_support). These are variables that not only appear in the function, but actually influence the output.
    • We currently have this as the trait GatherLiterals.
  • fun.usupport

    • "Untyped" support. Not sure what this is.
    • Equals frozenset(v.uniqid for v in self.support).
  • fun.inputs

    • The same as support, but ordered instead of unordered.
    • Equal to tuple(sorted(self.support, key=lambda ex: ex.node.data())) in expr, not implemented in boolfunc for some reason.
  • fun.top

    • The first variable from fun.inputs.
    • Not sure if we need this, but it is simple enough to add?
    • Equal to if self.inputs: return self.inputs[0] else: return None in boolfunc.
  • fun.degree

    • The arity of a function (the N in a function B^N => B).
    • Equal to len(fun.inputs).
  • fun.cardinality

    • The cardinality of the relation B^N => B.
    • Always equal to 2^fun.degree.
  • fun.iter_points

    • Iterator over all points in an N-dimensional Boolean space (e.g. for 3 variables, 8 vertexes of a 3D cube)
    • The N-dimensional Boolean space is passed as an argument in the form of a list of Boolean variables; the function takes its length and uses it to drive the loop for 2^len(bool_var_list) iterations.
    • Equal to for num in range(1 << len(vs)): yield num2point(num, vs)
  • fun.iter_domain

    • Iterator over the 2^degree possible inputs.
    • Equal to yield from iter_points(self.inputs).
    • We currently support enumerating the possible inputs with the trait PowerSet, but not as an iterator.
  • fun.iter_image

    • Iterator over the 0/1 result for each item in fun.iter_domain.
  • fun.iter_relation

    • Just a zip of fun.iter_domain and fun.iter_image.
  • fun.restrict

    • Takes a point (dictionary of var: bool_value, where value is 0 or 1) and produces a new function where the variables fixed by the point are inlined into the representation (i.e. they no longer appear in the function at all, and are instead ones or zeros).
  • fun.vrestrict

    • The same as fun.restrict, but takes the point as a vector rather than a dictionary, and calls vpoint2point.
    • Despite this link, I still have no idea what it does.
  • fun.compose

    • Takes a mapping of var: function and returns a new function with all of the specified variables substituted for the given functions. It is not quite clear what representations are supported or whether the functions must be the same representation.
    • This is not a rename, but rather a substitution of variables with new functions.
  • fun.satisfy_one

    • If the function is satisfiable, return one satisfying input.
    • For tables, this is simple, as looking at the outputs is enough. For expr, this branches based on whether the expr is in CNF or whether assumptions are set.
    • For expr, uses a recursive _backtrack, which iteratively .restricts the .top input to be 0 or 1.
  • fun.satisfy_all

    • Iterate through all satisfying input points.
    • For tables, this is simple, as looking at the outputs is enough. For expr, this branches based on whether the expr is in CNF.
    • For expr, uses a recursive _iter_backtrack.
  • fun.satisfy_count

    • Return the number of satisfying input points.
    • Equal to sum(1 for _ in self.satisfy_all())
  • fun.iter_cofactors

    • Takes a list of variables and then iterates over the "cofactors" w.r.t. these variables. In practice, this seems to be: "consider all valuations of the given variables, then restrict the function to this valuation". (e.g. for vars=[x,y], these are 4 possible functions resulting from fixing x,y in the initial input)
    • Equal to for point in iter_points(variables): yield self.restrict(point)
  • fun.cofactors

    • The same as fun.iter_cofactors, but produces a tuple.
    • Equal to tuple(cf for cf in self.iter_cofactors(vs))
  • fun.smoothing

    • This is existential quantification (i.e. f' = f_{x=0} | f_{x=1}); see the sketch after this list.
    • Equal to functools.reduce(operator.or_, self.iter_cofactors(vs))
  • fun.consensus

    • This is universal quantification (i.e. f' = f_{x=0} & f_{x=1})
    • Equal to functools.reduce(operator.and_, self.iter_cofactors(vs))
  • fun.derivative

    • They call this the "Boolean difference"... it seems to be "quantification", but using XOR instead of & or |. Not sure what this is used for...
  • fun.is_zero and fun.is_one

    • Syntactic check for trivial "zero" or "one" function cases (as opposed to fun.satisfy_one).
    • We could provide some semantic checks for tautologies and contradictions?
  • fun.box and fun.unbox

    • Convert a primitive type to/from a function if possible. Seems like it only works for 0/1/True/False?
  • fun.equivalent

    • Semantic equivalence of functions.
    • We provide this with the trait SemanticEq.
  • This is not explicitly mentioned, but it seems that operators are overloaded to create functions.

    • We can support it by implementing traits like impl BitAnd for Expression
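
To make the function-level API above concrete, here is a small, untested sketch against PyEDA expressions (only using the methods listed above; exact printed forms may differ):

```python
from functools import reduce
import operator
from pyeda.inter import exprvar

x, y, z = map(exprvar, 'xyz')
f = (x & y) | z                 # operators are overloaded to build expressions

print(f.support)                # unordered set {x, y, z}
print(f.inputs, f.degree)       # ordered tuple of the same variables, arity 3

g = f.restrict({z: 0})          # z is inlined and disappears: should reduce to x & y

f0, f1 = f.cofactors([z])       # f with z fixed to 0 and to 1
smooth = reduce(operator.or_, f.iter_cofactors([z]))   # existential quantification
cons = f.consensus([z])                                # universal quantification

print(f.satisfy_one())          # one satisfying point, e.g. {z: 1}
print(f.satisfy_count())        # number of satisfying points
```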

Expressions:

Expressions call C code since this commit in 2015.

  • exprvar

    • Creates a variable expression with the given name. Optionally, the function can also take an index or indices.
    • Variable uniqueness is checked within a global context and the same object is returned on collision. Note that the index/indices are not related to ordering; that is what the unique ID is for.
  • exprvars is the same thing, but builds a 1D/2D array of variables from the given prefix.

  • expr

    • Creates a constant or parses the string representation.
  • Not, And, Or, ...

    • Factory functions for creating expressions. Also operator overloading. These take an optional simplify argument to trigger simplification (see below)
    • Kinds:
      1. primary operators (And, Or, Not),
      2. secondary (Xor, Eq, Implies, ITE), &
      3. higher-order operators (Nor, Nand, Xnor, UnEq, OneHot0, OneHot, Majority, AchillesHeel, Mux).
    • We can probably ignore the higher-order operators for now, and then maybe add some translations for the secondary operators (basic construction is sketched after this list).
  • expr.simplify

    • This is done through C code that in turn calls the actual simplifying C code.
    • For each Boolean operator the code is different, e.g. for and/or, xor, eq, impl, ite.
    • It seems that simplification converts to only primary operators and performs some basic reductions (like constant elimination): source
  • expr.simple

  • expr.to_nnf, expr.to_cnf, expr.to_dnf

    • Produce normal forms.
  • expr.to_cdnf and expr.to_ccnf

    • "Canonical" normal forms. Haven't looked into this too much, but I don't think it is super critical.
    • They were actually removed and only remain in the documentation (source).
  • expr.complete_sum

    • A DNF with all prime implicants. This is probably just a table encoded as an expression.
  • expr.expand

  • expr.cover

    • Not sure what this is, but it seems to be another form of DNF: a DNF expression as a cover of cubes.
  • expr.encode_inputs

    • Seems to be just some utility function to get more info about inputs/ordering/something?
  • expr.encode_cnf and expr.encode_dnf

    • Seems to be related to some private DNF/CNF types? We probably don't need to care about these.
  • expr.depth

    • Calculates AST depth.
  • expr.tseitin | Tseitin’s encoding, creates a CNF expression that is smaller than "naive" conversion but contains auxiliary variables.

  • expr.expr2dimacscnf - A simple string CNF format. Would be nice to have but not essential.

  • There are expr.ast2expr and expr.to_ast, but no one knows what "AST" is here.
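
A rough, untested sketch of the expression-level entry points mentioned above (constructors, parsing, and normal forms); the call shapes are from the docs, so treat details as assumptions:

```python
from pyeda.inter import expr, exprvar, exprvars, Not, And, Or, Xor

a, b, c = map(exprvar, 'abc')
X = exprvars('x', 4)                 # 1D array of variables x[0]..x[3]

f = Or(And(a, b), Not(c))            # factory functions (simplify on by default)
g = expr("a & b | ~c")               # parse the string representation
h = Xor(a, b)                        # a "secondary" operator

print(f.to_nnf())                    # normal forms
print(f.to_cnf())
print(f.to_dnf())
```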

Satisfiability is inherited from the function trait.

You can add "assumptions" using "with" that are then used when assessing satisfiability (basically, you are operating as if all the functions were "restricted" to a subset where the assumptions hold). This is fundamentally not a bad concept, but the implementation is slightly weird as I'm not sure how much I trust this being a "global context" operation.
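
A minimal sketch of that assumption mechanism, assuming literals can be used as context managers the way the PyEDA docs describe:

```python
from pyeda.inter import exprvar

a, b = map(exprvar, 'ab')
f = a | b

with ~a:                      # assume a = 0 for everything inside the block
    print(f.satisfy_one())    # should only return points where b = 1
```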

Here, normal forms have their own classes. We probably don't need to go that far yet. But it might be nice to have some flag (e.g. is_dnf) or something to indicate that a normal form is used.

Tables:

truthtable takes a list of variables and a list of 2^n values. Builds a function table.
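
For example (untested), assuming the constructor accepts the output column as a string as shown in the PyEDA docs:

```python
from pyeda.inter import exprvars, truthtable

X = exprvars('x', 2)          # two input variables x[0], x[1]
f = truthtable(X, "0111")     # output column for the four input rows (here: OR)
```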

The documentation for truth tables is missing. Hence I'm just assuming it implements the basic Function trait and conversions to other representations.

The source code contains the following additional methods:

  • table.is_neg_unate - whether a function is negative unate
  • table.is_pos_unate - whether a function is positive unate
  • table.is_binate - whether a function is binate

BDDs:

  • bddvar | The same as exprvar but for BDD-compatible variables?
  • bdd.to_dot | Should just link to the existing implementation.
  • They use compose to rename BDD variables? I guess this should just work as well.
  • We don't need garbage collection because the borrow checker is our friend.
  • A BddNode class is available to explore BDD structure, but it should not be instantiated directly and is only managed internally.
  • bdd.dfs_preorder, bdd.dfs_postorder, bdd.bfs | Should not be too hard to implement (assuming we have the BddNode binding as well) but aren't essential.
  • BddConstant and BddVariable classes | We probably don't want these to be explicitly there, just adding this here to keep in mind that they exist in PyEDA.
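
A short, untested sketch of the BDD side (bddvar, operator overloading, compose-as-rename, and to_dot), with details taken from the notes above:

```python
from pyeda.inter import bddvar

a, b, c = map(bddvar, 'abc')
f = (a & b) | ~c              # BDD variables overload operators too

g = f.compose({a: c})         # "rename" a to c via compose
dot = f.to_dot()              # Graphviz source for visualising the BDD
print(f.satisfy_one())
```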

Conversions

  • truthtable2expr | In PyEDA, this seems to be just a trivial DNF conversion.
  • expr2truthtable | Self-explanatory.
  • expr2bdd, bdd2expr | Self-explanatory.
  • num2point, num2term, point2term, ... | Not sure we want to support these in the exact same way, but we probably need some sort of notion of "point"/"term"/etc. Also, they have the notion of "untyped" points (which seem to be just ordered). We might consider that, but I would be happier to solve the issue of variable ordering more comprehensively.
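
The conversion helpers listed above, roughly (untested):

```python
from pyeda.inter import exprvar, expr2bdd, bdd2expr, expr2truthtable, truthtable2expr

a, b = map(exprvar, 'ab')
f = a & ~b

tt = expr2truthtable(f)       # expression -> truth table
f_dnf = truthtable2expr(tt)   # back to an expression (essentially a DNF)
bdd = expr2bdd(f)             # expression -> BDD
f_back = bdd2expr(bdd)        # BDD -> expression
```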

Other

  • iter_points | Goes through all 2^n "points". Accepts a list of variables, returns an iterator over dictionaries (points; key is a variable instance, value is 0/1).
  • iter_terms | Similar to iter_points, but applies to functions.
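
For completeness, a sketch of iter_points (assuming it is importable from pyeda.boolalg.boolfunc, the module referenced above):

```python
from pyeda.boolalg.boolfunc import iter_points
from pyeda.inter import exprvar

x, y = map(exprvar, 'xy')
for point in iter_points([x, y]):
    print(point)              # dicts such as {x: 0, y: 0}, {x: 1, y: 0}, ...
```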

Ignored

  • We completely ignore "function arrays" for now.
  • pyeda.boolalg.minimization | Similar functionality should be available through "optimized" BDD -> DNF expression conversions.
  • pyeda.boolalg.picosat and pyeda.boolalg.espresso | These are essentially "bindings" for tools written in C that provide a limited set of features that we currently don't use.