Re: Homoiconic relational programming language - 03-08-2010 , 11:40 AM
On Mar 8, 5:14 am, Nilone <rea... (AT) gmail (DOT) com> wrote:
programs (examples can be provided on request), but no proposals to
use a relational structure. The projects that are getting anywhere
seem to preserve a traditional text-based editing option.
I am generally in favor of relational everything, but I would note
that traditionally program text has been consistently hierarchical in
structure. Things that can safely stay hierarchical may be things
that don't need relational representation.
The relational notion that records have no set physical order, i.e.
that order is not important to locating a record, may be another area
where the fit to programming is weak, since the sequence of commands
has traditionally been so important there.
Re: Homoiconic relational programming language - 03-08-2010 , 12:21 PM
On Mar 8, 7:40 pm, hoodwill <chase.saund... (AT) gmail (DOT) com> wrote:
in a relational model, and order can be expressed explicitly.
Furthermore, making these assumptions explicit may help us re-evaluate
traditional issues such as concurrency.
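The point about explicit order can be made concrete with a minimal sketch (purely illustrative, not from any actual system): a "program" stored as a relation whose tuples carry a sequence attribute. The set itself is unordered, yet execution order is fully recoverable from the attribute.

```python
# A relation of statements with an explicit sequence attribute.
# Physical tuple order carries no meaning; the set below is written
# out of order on purpose.
program = {
    (3, "print", "total"),
    (1, "load",  "x"),
    (2, "add",   "x, 1"),
}

# Execution order is recovered by sorting on the declared attribute,
# not by where a tuple happens to sit in storage.
ordered = sorted(program, key=lambda t: t[0])
opcodes = [op for _, op, _ in ordered]
```

The same trick is how relational systems model any sequence: order becomes data, so it can be queried and constrained like everything else.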
Re: Homoiconic relational programming language - 04-08-2010 , 11:18 AM
On Mar 8, 1:14 pm, Nilone <rea... (AT) gmail (DOT) com> wrote:
read the group.)
I've been thinking about this a lot, originally because of the dreaded
"object-relational mismatch". What is it precisely that causes it? I
think I have at least a partial answer. The control-flow,
record-at-a-time orientation of OO makes for irregular access across
data types, whereas relational is all about controlled, breadth-first
structuring. The temporally linear control flow and the presence of
uncontrolled side effects on individual records/objects are also very
different from the set-oriented, transactional view of the DB.
That's why I think the best current substrate for relational
processing would probably be a LISP derivative, or a functional
language such as Haskell. There we have the proper high-level
constructs for dealing with lists and sets. With a little syntactic
sugar, bona fide relations should be easy enough to embed. Side
effects could be avoided altogether, as in SQL (when used purely as a
DML), or they could be treated transactionally and in a principled
manner (e.g. via Haskell's monads). And most of all, the macro
facility of LISP dialects is strong enough to actually transform your
accesses into something an RDBMS could chew efficiently: entire
groups of monadic functions could be transparently turned into a
single transacted MERGE under Oracle, without the module
microstructure and call flow of the code getting in the way. Good
riddance to cursor loops, which are perhaps the single worst offender
in OR/M.
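The batching half of that idea can be sketched loosely in Python (the helper and table names here are hypothetical; in the scheme above this rewriting would happen in the macro expander, not at runtime): individual record-level upserts are accumulated and flushed as one set-oriented MERGE over a literal relation, instead of one statement per record.

```python
def build_merge(table, key, rows):
    """Render a single MERGE over a literal relation of buffered rows.

    `rows` is a list of (id, name) pairs; each becomes one branch of a
    UNION ALL literal relation, so the whole batch hits the table in
    one set-oriented statement rather than a cursor loop.
    """
    literal = " UNION ALL ".join(
        "SELECT %d AS id, '%s' AS name FROM dual" % r for r in rows
    )
    return (
        "MERGE INTO %s t USING (%s) s ON (t.%s = s.%s) "
        "WHEN MATCHED THEN UPDATE SET t.name = s.name "
        "WHEN NOT MATCHED THEN INSERT (id, name) VALUES (s.id, s.name)"
        % (table, literal, key, key)
    )

# Two buffered record-level upserts become one statement.
buffered = [(1, "a"), (2, "b")]
sql = build_merge("accounts", "id", buffered)
```

This only sketches the mechanics; the interesting part, as argued above, is doing the same transformation transparently over the call flow of ordinary-looking code.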
Of course this sort of thing would also need some interface innovation
on the RDBMS side. The DBMS sees this sort of client as someone who
wants to do a whole lot of very complicated, set-oriented processing
involving literal data. Under SQL, a typical state save would probably
contain thousands to tens of thousands of individual updates within a
single transaction, to multiple tables, preferably specifying related
data together as short-lived temporary relations which might have
little to do with the underlying schema. Neither SQL nor the other
data sublanguages fare well with this sort of thing. E.g. have you
ever tried to enforce even an arbitrary two-to-one attribute
functional dependency over a single table in one SQL clause? The
simplest way would be to declare a literal relation with the mapping
and then merge it into the destination table, but that facility just
ain't there in SQL or any other database language I know of.
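For illustration, here is roughly what checking such a dependency amounts to, sketched outside the database in Python (data and names made up): the dependency country -> currency holds when no determinant value maps to two different dependent values. A real solution would merge a literal mapping relation into the table, which is exactly the facility argued to be missing above.

```python
def violates_fd(relation, determinant, dependent):
    """Return rows where one determinant value maps to a second,
    conflicting dependent value (i.e. witnesses of an FD violation)."""
    seen = {}
    bad = []
    for row in relation:
        k, v = row[determinant], row[dependent]
        if k in seen and seen[k] != v:
            bad.append(row)
        else:
            seen.setdefault(k, v)
    return bad

rows = [
    {"country": "FI", "currency": "EUR"},
    {"country": "FI", "currency": "FIM"},   # violates country -> currency
    {"country": "SE", "currency": "SEK"},
]
bad = violates_fd(rows, "country", "currency")
```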
Still, I think if somebody were to pull this thing off, that'd be a
*huge* productivity *and* performance boost. Precisely because the
homoiconicity could be leveraged to its fullest, and because once
you're in the pure-data, relational mindset, there are some
exceedingly powerful programming idioms floating around for it. They
go under the headings of complex event processing, truth maintenance /
belief revision systems (very relevant for the view update problem,
btw), and at least one early system (whose name I cannot remember)
which implemented pretty much all of the online relational primitives
that have been proposed in the recent past (memorex, no, cerebrex,
no, it was something vaguely to that effect).
Re: Homoiconic relational programming language - 04-08-2010 , 11:25 AM
Sampo Syreeni wrote:
breadth nor toward depth.
Re: Homoiconic relational programming language - 04-08-2010 , 01:09 PM
On Apr 8, 7:25 pm, Bob Badour <bbad... (AT) pei (DOT) sympatico.ca> wrote:
really is quite difficult to put into words. I was describing my
intuition as somebody who has to produce code in both paradigms.
Algebraic and declarative are two other words for the relational
I chose breadth-first because that's how I, and I think most
relational coders, lay out their code. The idiot level in
OO/declarative is to go with the control flow, structure it
irregularly with regard to the access depth, and so on. Within the RM
the same thing is to start with one set of tuples/relation and then
progressively lay out new layers outwards from it, using joins as a
symbolic, set-level equivalent of good old record pointers. The
hallmark of that fu is copious one-sided outer joins branching out
from a single base table (I just had to update some code like that).
But even what I think of as proper style puts out all of the
relations, their mutual dependencies, query constraints, and so on,
more or less at the same level; there is no real nesting involved as
there is with hierarchical program flow in a structured, procedural
language. (And OO languages are even more so than older ones like
Modula.) Plus, our everyday notion of recursion tends to be rather
limited compared to what you can do within a Turing-complete language:
we don't really do ragged recursion over several datatypes, for
example. Again, a purposeful choice, because it makes our optimizers
much simpler and more effective for bulk operations; but it does keep
DBs from doing certain things, like the topologically oriented stuff
I mentioned, or very complex and plastic data models such as those
used in CAD/CAM, bioinformatics, or linked data. There is nothing in
the RM as such that would stop it, and I'm a staunch advocate of that
sort of development, but in current relational data languages you
can't really go there.
One telling sign in this regard is that our declarative paradigm
hasn't really gone past finite FOPL at the standards level. Sure, we
have special-case solutions like Oracle's CONNECT BY clause and the
newer SQL's subquery factoring clause. They can more or less solve the
classical BOM problem. But do they really do much else, so that we
could e.g. support introspection and ragged recursion over multiple
data types simultaneously, while keeping to the declarative roots of
the RM? Is there, for example, a sturdy implementation of a
multiple-table, parallel transitive closure operator anywhere which
meshes well with basic relational algebra? (To say nothing of a
least-fixed-point operator, which would be sorely needed for
RDF/OWL-type work.) I don't think so, and that's pretty much the
minimum of what they're doing with object frameworks right now --
even when doing it at the highest meta levels that their languages
allow.
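To make the missing operator concrete, here is a naive least-fixed-point sketch in Python: the transitive closure of a binary relation, computed by iterating a join until nothing new appears. This is the semantics a declarative closure operator (or a recursive CTE solving the BOM problem) provides, here over a toy bill-of-materials relation.

```python
def transitive_closure(edges):
    """Least fixed point of R := R union (R join R) over a set of pairs."""
    closure = set(edges)
    while True:
        # Join the relation with itself on the middle attribute.
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new <= closure:        # nothing new: fixed point reached
            return closure
        closure |= new

# Toy bill of materials: part -> direct subpart.
bom = {("bike", "wheel"), ("wheel", "spoke"), ("wheel", "rim")}
parts = transitive_closure(bom)
```

The sketch is single-table and sequential; the point in the paragraph above is precisely that a multiple-table, parallel version of this, meshing cleanly with the rest of the algebra, is nowhere to be found in the standards.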
grant. The former two, mostly; Lorentzos I don't seem to remember
distinctly. I've grown into the relational camp, even "ideologically
speaking" (e.g. the RDF folks have long since stopped liking me much
because I now and again tell them, in their own terms, that they're
dealing us EAV, and that that's been proven bad many times over), but
still Date and Darwen do not seem to be contributing much in my
judgment. Except perhaps demagoguery. Perhaps the best example of
this, to me, is how they handle objects within the relational model:
they approve of them, but never really seem to square away the
difference between flat and rich relations at the
theoretical/conceptual level.
Before you jump in, let me outline my idea of how it should be done
as well; no point in judging without counterpoint, you know... To me
the RM is about axiomatic semantics, as opposed to intuitive ones (the
latter have their value; they belong to the third level of the ANSI
model, the conceptual one), so you have to be able to perform as a
mathematician when developing the theory further. That means speaking
about the theory purely in terms of the axioms/properties it follows.
And so complex objects could in theory be admitted into the
model... when they *behave* as though they were atomic values whenever
referred to as such. At that level of pointlike members of sets /
unique names, objects are pretty cool as part of the model. But then,
on information principle grounds, they should also be broken down into
atomic stuff elsewhere in the database, so that we can use relational
operators to drill into their real guts. No nested stuff, a minimum of
type-specific operators, and so on. Date and Darwen *never* spell that
out loud, which to my mind means they're betraying basic information
modeling principles in the service of good publicity.
Also, they *never* stop to consider the fact that relational algebra
might not be good enough for certain data handling tasks. Do pardon my
language, but they seem like blind believers to me, and as such not
very useful in developing the RM further. Call me a passionate heretic
if you will, then, but I'd rather get my information somewhere else.
*Because* I care about sound principles.
Re: Homoiconic relational programming language - 04-08-2010 , 02:02 PM
Sampo Syreeni wrote:
Re: Homoiconic relational programming language - 04-08-2010 , 06:12 PM
On Apr 8, 6:18 pm, Sampo Syreeni <de... (AT) iki (DOT) fi> wrote:
the following papers in particular:
On understanding data abstraction, revisited (William R. Cook, 2009)
Object-oriented programming versus abstract data types (William R. Cook, 1990)
A tutorial on (co)algebras and (co)induction (Bart Jacobs, Jan Rutten, 1997)
Objects and classes, co-algebraically (Bart Jacobs, 1996)
Re: Homoiconic relational programming language - 04-08-2010 , 06:50 PM
On Apr 8, 3:12 pm, Nilone <rea... (AT) gmail (DOT) com> wrote:
are special. Therefore the choice of a set for an object in W. Cook's
latest paper is not very convincing. When comparing abstract data
types and objects, William interprets objects as characteristic
functions. OK, characteristic functions are defined on sets, which is
why the choice of a set as an example is suspect. What is the
interpretation for other kinds of objects, e.g. free monoids (a.k.a.
strings of characters), relations, etc.? Or maybe objects always
assume set structure, so that instead of monoids we study semirings
(sets of strings), relations as sets of tuples, and so on?
Re: Homoiconic relational programming language - 04-08-2010 , 09:28 PM
On Apr 9, 1:50 am, Tegiri Nenashi <tegirinena... (AT) gmail (DOT) com> wrote:
sufficient to properly answer questions and objections about those
papers. Still, I'll try to formulate a coherent reply.
What I got from those papers is that algebraic data types (arrays,
tuples, relations) are structural abstractions, while Cook's
procedural data abstractions, a.k.a. co-algebraic types, are
behavioural abstractions and orthogonal to algebraic types.
If you view a value as an element of a closed domain, that's an
algebraic view. Since the domain is closed, equality and other
relations can be defined over the domain. Algebraic data structures
allow us to derive and compose closed domains as well as the relations
over those domains. In a closed, existential domain, elements have no
behaviour, only value. It seems to me one of the themes of TTM is to
properly describe the algebraic view.
Classes / PDAs / co-algebraic types define elements of an open domain
via characteristic functions. Since the domain is open, equality and
other relations can't be defined in general. Composition and
derivation define behaviour, not value.
Modeling values as objects is inappropriate: since the domain is open
and we only have a characteristic function by which to identify
elements, it is always possible to define an imposter which breaks our
assumptions. More than that, since we can't define the domain,
relations over the domain are impossible, and composition / derivation
of new domains doesn't work as expected.
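The imposter argument can be made concrete with a small Python sketch (names are illustrative): a value drawn from a closed domain has decidable structural equality, while an "object" known only through its characteristic function can be impersonated by something that agrees on every query we happened to make.

```python
# Algebraic view: a set *value* over a closed domain.
# Equality is structural and decidable.
evens_value = frozenset(range(0, 10, 2))
also_evens = frozenset([0, 2, 4, 6, 8])

# Coalgebraic view: a set known only through its membership behaviour.
def evens_object(n):
    return n % 2 == 0

def imposter(n):
    # Agrees with evens_object on everything probed so far,
    # but lies beyond the probed range.
    return n % 2 == 0 if n < 100 else True

# Values can be compared outright; behaviours can only be sampled.
same_value = (evens_value == also_evens)
samples_agree = all(evens_object(n) == imposter(n) for n in range(10))
```

Here `same_value` is a genuine equality over the closed domain, while `samples_agree` merely reports that the imposter has not yet been caught: at 101 the two behaviours diverge.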
Similarly, modeling objects as values is inappropriate. The closed
domain is always too restrictive, since we can't foresee all future
behavioural requirements when we define it in the first place. In
addition, it forces us to choose / deal with values that have no
meaning, e.g. file handles, process IDs, etc. Composition / derivation
of new domains still doesn't work as expected, because it enforces
relations over the domain that just don't apply.
That, in essence, is the object-relational mismatch from my current
point of view. I hope I'm on the right track!