According to Wikipedia, the word "athame" (referring to a knife used in ritual magic) is a corruption of "artavus," a quill knife ("a small knife used for sharpening the pens of scribes" -- "cultellus acuendis calamis scriptorii," as found in early editions of the Key of Solomon). Of course, the use of such knives in ritual magic has changed over the years and has been subject to fanciful interpretation and outright invention. However, the subtle meaning (as I see it) is that the pen is equivalent to the knife or sword: a tool that cuts between good and evil. Truly, writing is a "Scheidekunst" (literally, an "art of separating"), though I see that word not as Novalis sees it (as a perverse, artificial separation of philosophia into separate scientific fields that do not inform one another), but as "the art of decision." Surgeons wield the scalpel to access diseased tissue and remove it; critics wield the pen to reveal disorders of thought and excise them.
15 comments:
I have a tiny Schrade Uncle Henry lockblade on a chain hooked to my keys. My athame; and indeed, is the pen mightier than the sword? Anyway, is there a fast algorithm which can project large square matrices onto a certain matrix subspace? The highly patterned basis for this subspace is given in this post of mine. Then one would compute the n-1 parameters needed by what I've given here.
Just skip over the general silliness of the rest of the posts.
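For concreteness, here is a minimal sketch (in Python/NumPy) of the generic way to project a square matrix onto the span of a set of basis matrices, via the Frobenius inner product. The random basis below is a hypothetical stand-in, since the actual patterned basis lives in the linked post:

```python
# A minimal sketch, not the poster's actual basis: project a square matrix
# onto the span of a set of basis matrices using the Frobenius inner product.
import numpy as np

def project_onto_span(M, basis):
    """Solve the normal equations G c = b, where G[p,q] = <B_p, B_q>_F
    and b[p] = <B_p, M>_F, then assemble the projection from c."""
    B = np.stack([Bk.ravel() for Bk in basis])  # rows are flattened basis matrices
    G = B @ B.T                                 # Gram matrix of the basis
    b = B @ M.ravel()                           # Frobenius inner products <B_k, M>
    c = np.linalg.solve(G, b)                   # coordinates in the basis
    return (B.T @ c).reshape(M.shape), c

# Toy example: a 2-dimensional subspace of 3x3 matrices.
rng = np.random.default_rng(0)
basis = [rng.standard_normal((3, 3)) for _ in range(2)]
M = rng.standard_normal((3, 3))
P, coeffs = project_onto_span(M, basis)
print([float(np.vdot(Bk, M - P)) for Bk in basis])  # residual orthogonal: ~[0, 0]
```

Presumably a fast algorithm would come from the Gram matrix G having exploitable structure for a highly patterned basis.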
Greetings!
I have to read your problem statement more carefully before I commit to saying much, as it's been a while since I studied graph theory. Iterative methods for Ax = b and Ax = λx are my expertise, so I tend to turn everything into vectors, which looks more or less like what you did, in everything but notation. (Of course you're not doing a Krylov method, but I mean that I think about projecting things onto subspaces of vectors a lot.) I'm curious to learn more, though! Where would be a good place to start?
I'm curious about your blog, too. Would you like to chat more offline?
I'm trying to figure out what that phi is all about. Does it tell you which vertices are connected? I hope I'm not sounding _too_ ignorant ;-P
Well, yes, it tells you how the vertices are connected. Each column represents an edge, and the rows correspond to vertices. An edge e_k connects V_i to V_j if Φ_i,k = +1 and Φ_j,k = -1, with the rest of the column 0; the arrow points from the +1 vertex to the -1 vertex. Columns corresponding to edges that point back to their own vertex, the loops, are all 0. The projection mapping λ is intended to remove all inefficient transactions, the cycles, from a matrix representing exchanges between vertices. I wrote about 100 pages of notes in pdf form here. The economic problem I'm thinking of is an update to a section on applications: I guess it begins in this post, and I began to summarize it here for incorporation into my notes. Then I continued to add to the idea in further posts, using Sage output for examples. There was some earlier stuff on graph theory and free Boolean algebra in the blog that doesn't pertain to the question of projecting matrices onto a subspace. Matrices are themselves members of a vector space, just as vectors are. Plus I'm dealing with linearly mapping matrices to row vectors.
Oh, and little φ represents the corresponding linear mapping from the square matrix space over the edges into the space of row vectors.
The proper term for the matrix Φ is the incidence matrix of the multigraph. It's like a patch bay, but you can't patch from column to column: only one patch per column, or none. A zero column doesn't indicate which vertex is incident to the loop, only that it is a loop.
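To make the patch-bay picture concrete, here is a minimal sketch of building Φ from a list of (source, target) pairs, assuming the sign convention above (+1 at the tail of the arrow, -1 at the head, an all-zero column for a loop):

```python
# A minimal sketch: build the incidence matrix Phi of a directed multigraph
# from a list of (source, target) vertex pairs, with +1 at the source row,
# -1 at the target row, and an all-zero column for each loop.
import numpy as np

def incidence_matrix(n_vertices, edges):
    Phi = np.zeros((n_vertices, len(edges)), dtype=int)
    for k, (i, j) in enumerate(edges):
        if i != j:              # loops leave their column all zeros
            Phi[i, k] = +1      # edge e_k points away from vertex i...
            Phi[j, k] = -1      # ...and into vertex j
    return Phi

# Three vertices; both orientations between 0 and 1, plus a loop at 2.
edges = [(0, 1), (1, 0), (2, 2)]
print(incidence_matrix(3, edges))
# [[ 1 -1  0]
#  [-1  1  0]
#  [ 0  0  0]]
```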
Thanks for the explanations!
Just looking at how you represent the matrices as vectors, there could be many different ways to optimize that operation, depending on how large and how sparse the basis matrices are. Are you interested in optimizations at that level (I guess I would call it the "computer science" level)?
At the moment I would say they're sparse, skew-symmetric matrices in that basis. I work with small, manageable examples, but I'm intellectually curious about studying transactions amongst a large group of entities, and am trying to get a feel for the practical size of models versus the state of the art: what size network needs supercomputing, and what size network fits desktop computing? Just an informal idea. How fast can systems of linear equations be solved in real time, sparse versus dense (see the rough benchmark sketch below)? The graph here has n vertices and n^2 edges: each pair of vertices is connected by two edges of opposite orientation, plus a loop on each vertex. Vectors over the space of edges can be n^2-tuples or n×n matrices, which are better for conceptualizing.
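As a rough, informal way to get a feel for desktop solve times, one could compare a sparse and a dense solve of the same system; the size and density below are arbitrary placeholders:

```python
# A rough, informal benchmark: sparse vs. dense solve of the same system.
# Size and density are arbitrary; tune them to the network being modeled.
import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
A = sp.random(n, n, density=0.001, format="csc", random_state=1) \
    + 10 * sp.eye(n, format="csc")   # diagonal shift keeps it nonsingular
b = np.random.default_rng(1).standard_normal(n)

t0 = time.perf_counter(); x_sparse = spla.spsolve(A, b)
t1 = time.perf_counter(); x_dense = np.linalg.solve(A.toarray(), b)
t2 = time.perf_counter()
print(f"sparse: {t1 - t0:.3f}s  dense: {t2 - t1:.3f}s")
```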
The (i,j) entry of a matrix over the space of edges represents a quantity being transferred from vertex i (the row) to vertex j (the column). The sum along a row is the amount sent out by a vertex; the sum down a column is the amount received by a vertex.
φ(M) = ρ(M) - ρ(M')
is always in the subspace over the vertices orthogonal to (1, 1, ..., 1), the all-ones n-tuple. In other words, the sum of the components of φ(M) is always 0.
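A minimal sketch, reading ρ as the row-sum map and M' as the transpose (my reading of the notation above): φ(M) is then the net flow at each vertex, and its components always sum to zero.

```python
# A minimal sketch, assuming rho = row sums and M' = transpose:
# phi(M) = rho(M) - rho(M') is the net flow (sent minus received) per vertex.
import numpy as np

def phi(M):
    sent     = M.sum(axis=1)   # rho(M):  row sums, amount sent by each vertex
    received = M.sum(axis=0)   # rho(M'): column sums, amount received
    return sent - received

rng = np.random.default_rng(2)
M = rng.integers(0, 10, size=(4, 4))
print(phi(M), phi(M).sum())    # net flows per vertex, and 0
```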
I'm reading your (PDF) notes on linear algebra and graph theory now, so your notation is making more sense. It confuses me a bit because I'm used to thinking of a sparse matrix as a graph (e.g., "partition the matrix"), rather than vice versa.
Gosh, can't you just write "rational numbers or better" instead of "fields of characteristic zero"? ;-P
Let me pick your brain a little more about "desktop computing," but only because you mentioned wanting a fast implementation. That means you care about how long these computations take, which means that either they are very large (for a desktop computer), or you have to do a lot of them in a row, or both. I'm curious which, and also (if I may ask) how large these graphs and their corresponding matrices tend to be.
Here are some ideas, in increasing order of effort:
1. Represent everything using conventional sparse matrix data structures in your mathematical scripting language of choice. When you map from matrices to vectors, keep those vectors sparse (n x 1 sparse matrices).
2. #1, but also think about how to keep sparse things as sparse as possible. Some operations may increase the number of nonzero entries. (Try sparse Gaussian elimination on a "northwest-pointing arrow matrix," for example.) Projections are going to look like x -> (I - V V^*) x = x - V (V^* x), where V is a matrix whose columns are the basis vectors (see the sketch after this list). That inner product V^* x may wipe out your sparsity anyway, but you might like to arrange the columns of V so that when you do that product column-wise, the sparsity gradually increases. (This is analogous to the trick discussed in Golub and Van Loan, 3rd ed., for avoiding unnecessary operations when constructing an explicit Q factor from the results of a Householder QR factorization.)
3. Given how regular the structure of those basis matrices looks, maybe you'll want to represent them as something other than a regular sparse matrix. They have dense columns and rows, and dense substructure is always a great thing to exploit. You could even represent them as outer products.
4. Don't store those capital-Phi matrices, just compute their entries from the original graph on the fly as you need them. This may or may not be faster.
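Here's a minimal sketch of the projection in idea 2, keeping everything sparse and never forming I - V V^* explicitly. The toy V is the normalized all-ones column, the very direction φ(M) is orthogonal to in the discussion above:

```python
# A minimal sketch of idea 2's projection x -> x - V (V* x), keeping V and x
# sparse. V's columns are assumed orthonormal here; otherwise replace the
# product V* x with a least-squares solve against V.
import numpy as np
import scipy.sparse as sp

def project_out(V, x):
    return x - V @ (V.conj().T @ x)   # never materializes I - V V*

# Toy example: project out the normalized all-ones direction.
n = 6
V = sp.csr_matrix(np.ones((n, 1)) / np.sqrt(n))
x = sp.csr_matrix(np.arange(n, dtype=float).reshape(n, 1))
r = project_out(V, x)
print(r.toarray().ravel())            # x minus its mean
print((V.conj().T @ r).toarray())     # ~[[0]]: residual orthogonal to V
```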
Anyway, just a few random ideas. Let me know what you think!
Sorry, I didn't know the more casual language: "rationals or better." I'm also interested in the ring modules ZZ^n, leading to a linear, vector-space-style formulation of multigraphs.
I'm interested in how practical and useful this formulation of transactions in a network might or might not be.
Regarding suggestion 4: I wrote down such an algorithm, which uses a database structure with two fields corresponding to the vertices (and defining direction), where the records correspond to the edges. But one would have to compare the storage requirement of this structure with that of the incidence matrix.
Incidence matrix: V × E × 2 bits, with whatever compression is obtained from sparseness.
Two-column table: 2 × E × (bits for the integer representation of a vertex), i.e., the number of bits per field.
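A minimal sketch of what that might look like: records are edges, the two fields are the source and target vertices, and entries of Φ are computed on demand rather than stored (suggestion 4):

```python
# A minimal sketch of suggestion 4: a two-column edge table (one record per
# edge, fields = source and target vertex), with entries of Phi computed
# on the fly instead of being stored.
import numpy as np

edge_table = np.array([[0, 1],    # e_0: 0 -> 1
                       [1, 0],    # e_1: 1 -> 0
                       [2, 2]])   # e_2: a loop at 2 (zero column in Phi)

def phi_entry(i, k):
    """Entry Phi[i, k], derived from the edge table on demand."""
    src, dst = edge_table[k]
    if src == dst:        # loop: the whole column is zero
        return 0
    if i == src:
        return +1
    if i == dst:
        return -1
    return 0

print([[phi_entry(i, k) for k in range(3)] for i in range(3)])
```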
I'm just a theoretic type who dabbles in high level programming languages. I'll have to come back to the other ideas you suggest.
We can in some circumstances reduce n^2 transactions down to n-1 parameters. I want to study functional relations between transactions of various commodities and currencies, as well as differential or difference equations. For instance, have a look at this simple model of loans and interest exchanges in a network. I'm only scratching the surface presently.
Oh, and the mapping λ(M) comes from solving Ax = b, as in this post, where I'm using formal inner product notation.
mmm, Galerkin projection... the way you get λ(M) looks like the way you get a system of linear equations out of a finite element basis.
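For reference, a tiny generic illustration of what I mean by a Galerkin projection (not your specific λ(M) construction): restrict Ax = b to a subspace spanned by the columns of V and solve the small projected system.

```python
# A tiny generic Galerkin projection: solve (V* A V) y = V* b and take
# x ~ V y. Illustrative only; all names here are made up for the example.
import numpy as np

rng = np.random.default_rng(3)
n, m = 8, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)   # comfortably nonsingular
b = rng.standard_normal(n)
V, _ = np.linalg.qr(rng.standard_normal((n, m)))  # orthonormal basis columns

y = np.linalg.solve(V.T @ A @ V, V.T @ b)         # small m x m system
x_approx = V @ y
# Galerkin condition: the residual is orthogonal to the subspace.
print(V.T @ (b - A @ x_approx))                   # ~[0, 0, 0]
```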
Your "database structure" sounds rather like an adjacency list data structure.
Oh, and when I said "rationals or better," I was being silly ;-) I am a numerical analyst so everything is a bunch of bits to me!
I'm checking out Galerkin. λ(M) is another square matrix the same size as M, and φ(M) is a row vector. The range of both is an (n-1)-dimensional space.
φ(X - λ(X)) = 0, the zero vector.
Hey, haven't heard from you in a while -- did the Galerkin approach work out?