Today I added a handful of lines of code to calculate the average detail band height seen so far, and to use that average to reduce the number of times group header bands are “orphaned” at the bottom of a page. I suspect I really ought to use a figure somewhere between the average and the maximum, but for now I’m going with the plain average. Of course, if all the detail bands in a given report are the same height, it makes no difference either way.
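The idea can be sketched in a few lines. This is not PollyReports’ actual internal code; the class and function names below (`BandHeightTracker`, `header_fits`) are hypothetical, and it just illustrates the heuristic: track a running average of detail band heights, and don’t start a group header near the bottom of a page unless the header plus one average-height detail band will fit.

```python
class BandHeightTracker:
    """Tracks the running average height of detail bands rendered so far."""

    def __init__(self):
        self.total_height = 0
        self.count = 0

    def record(self, height):
        """Record the height of one rendered detail band."""
        self.total_height += height
        self.count += 1

    @property
    def average(self):
        # Before any detail band has printed, assume zero, so a header
        # is only checked against its own height.
        return self.total_height / self.count if self.count else 0


def header_fits(tracker, header_height, remaining_space):
    """True if a group header plus one average-height detail band fits
    in the space remaining on the current page."""
    return header_height + tracker.average <= remaining_space


tracker = BandHeightTracker()
for h in (12, 14, 16):                # detail bands already rendered
    tracker.record(h)

print(tracker.average)                # 14.0
print(header_fits(tracker, 20, 40))   # True: 20 + 14 <= 40
print(header_fits(tracker, 20, 30))   # False: 20 + 14 > 30
```

Using a figure between the average and the maximum, as mentioned above, would just mean blending `tracker.average` with a tracked maximum before the comparison.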
I’m using PollyReports in a production system currently; in fact, that’s how I came to realize this was needed. After a bit of live testing, I may modify the algorithm a bit more. Right now I just want to see how much difference it makes.
Some time back I made a post about the development of PollyReports, and I gave code line counts based on Robin Parmar’s lines-of-code counter which ascribed a truly huge number of lines to Geraldo. While I knew it was more complex than PollyReports, I began to feel that there had to be some mistake… it just couldn’t be THAT big.
So I took Robin’s program apart and rewrote it, keeping his (or is it her?) line-counting mechanism intact but altering the traversal scheme so that only *.py files are counted, and so that the results are listed in a fashion similar to the Unix/Linux du command. Using the current 1.5.1 version of PollyReports, the module itself weighs in at 262 actual code lines, 388 total lines (including comments and docstrings). Using the version of Geraldo I have downloaded, the total count for source files (excepting the effectively empty tests folder) is 1,785 actual lines of code, 4,885 total lines including comments and docstrings. I’m pretty sure the code I adapted from Robin’s script isn’t correct in all cases; the docstring detector will not catch every docstring, and may be confused by some literal string assignments (basically, if you put three double quotes on a line by themselves, you’ll confuse it). Still, these counts seem far more reasonable.
Geraldo is almost seven times the size of PollyReports: still pretty big, but nothing like the 340-fold figure I originally reported. I suspect Robin’s code was tallying the documentation files along with the actual Python code.
I was going down the highway on my motorcycle the other day, thinking about a project I’m doing for a customer. We just went live with it, and one of the first things I did (after transferring data from their old program to the new one) was to disable the data reload script. I wrote the script for my convenience; it clears the tables, then loads them again from the export files I was using to provide sample data. Obviously this would be bad, applied to the production database.
Then I found myself thinking about how often we visit the edge of disaster. After all, here I was, going down a two-lane road on a motorcycle at 55 MPH. Everything was going quite well… but what would happen if I put a foot down?
Well, it would hurt, that’s what. More than likely I’d break or tear something I might need later… it’s even conceivable that I might wreck my bike and kill myself.
So sure, I could be driving a car, and putting my foot down wouldn’t be a problem. Doesn’t mean I couldn’t kill myself just as effectively by doing something insanely stupid but very easy to do. Like run off the road into a light pole.
At least, on the computer, I could disable the reload script. That wouldn’t prevent me from accidentally typing “delete from important_table” in the MySQL command-line client. MySQL does have a “safe mode” that refuses to run such a query without a WHERE clause, but you have to remember to invoke the client with the right option.
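For reference, that option is `--safe-updates`; the sketch below shows the usual ways to turn it on (the user and database names are placeholders):

```shell
# Start the mysql client in safe-updates mode: UPDATE and DELETE
# statements without a key in the WHERE clause (or a LIMIT) are refused.
mysql --safe-updates -u myuser -p mydb

# The same option has a more memorable alias:
mysql --i-am-a-dummy -u myuser -p mydb

# Or enable it inside an existing session:
#   SET SQL_SAFE_UPDATES = 1;

# To make it the default, add it to ~/.my.cnf under the [mysql] group:
#   [mysql]
#   safe-updates
```

Of course, the point of the paragraph above stands: a safety net you have to remember to deploy is only half a safety net.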
Do I have a point? I’m not sure. It’s just that we skate so close to various sorts of disasters all the time, and rarely think about the consequences until it’s too late.
Like the man in the uniform used to say, be careful out there.