SAP ASE Repository

A repository for SAP (previously Sybase) ASE stored procedures and scripts
Please contact saplump at yahoo dot com to contribute, to report problems with what's available, or to suggest something new

Please feel free to use anything on this page. All I ask is that you don't pass it off as being authored by yourself
If you have a project that will make use of one or more of these procedures, and you will need to hire extra staff for it, would you please consider contacting me to see if I'd be interested in the work?
Find out more about Raymond Mardle

Page Last Updated : 25th May 2023 : user defined functions (UDFs) sp_rpm_str_sep & sp_rpm_str_sep_int changed to use datalength instead of log10 to get the number of digits : sp_rpm_tablesize for ASE 16.0 - added four new result columns that count the number of columns in each table's indexes, the ability to save results in a pre-existing temporary table, and restrictions that compare result columns
11th May 2023 : new user defined functions (UDFs) sp_rpm_str_sep & sp_rpm_str_sep_int : sp_rpm_tablesize for ASE 16.0 - improved performance and a new option to output with thousands separators : sp__optdiag for ASE 16.0, without and with a user defined function - bug fixes, and integer values now output without decimal places (and with thousands separator(s) when the version with the UDF is loaded)
25th Mar 2023 : sp__optdiag for ASE 16.0 - better partition handling, can output known integers without decimal places, and caters for values greater than 9 : sp_rpm_tablesize - added a column count, flags on columns with optimiser statistics, and a count of those columns : updated presentation
10th Mar 2018 : sp__optdiag for ASEs 15.5, 15.7 & 16.0 and sp_rpm_summ_stats can / will now output partitioned stats for hash and round-robin partitioning in condition creation order instead of partition ID order (partition IDs can wrap around the int range)
5th Mar 2018 : sp__optdiag for ASEs 15.5, 15.7 & 16.0 and sp_rpm_summ_stats can / will now output partitioned stats for list and range partitioning in condition creation order instead of partition ID order (partition IDs can wrap around the int range)
1st Mar 2018 : sp__optdiag for ASEs 15.7 and 16.0 now executed as owner
5th Feb 2018 : Performance update made to sp_rpm_summ_stats

Cheating With Statistics in SAP ASE : A PDF of a presentation describing the statistics based system procedures on this page, and how to use them to "cheat" where statistics updating is concerned (25 Mar 2023)

sp_rpm_str_sep : User defined function to output up to decimal (38, 19) values with thousands separator(s). ASE 15.0.2+ (25 May 2023)

sp_rpm_str_sep_int : User defined function to output up to decimal (38, 0) values with thousands separator(s) - needs to exist before sp__optdiag_u_UDF is loaded to create sp__optdiag that outputs integer values with thousands separators. ASE 15.0.2+ (25 May 2023)
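As a quick illustration of both functions (a sketch : it assumes the loader scripts have created the functions in the current database, that a single-argument call is the right usage, and that the separator placement is what I would expect rather than output captured from the code) :

    -- thousands separator(s) on a decimal value (sp_rpm_str_sep)
    select dbo.sp_rpm_str_sep(1234567.891)     -- expected : 1,234,567.891

    -- thousands separator(s) on an integer value (sp_rpm_str_sep_int)
    select dbo.sp_rpm_str_sep_int(987654321)   -- expected : 987,654,321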

sp__optdiag : Updated versions of Kevin Sherlock's procedure to give the same output as optdiag, but from a stored procedure. Updated 21 Jun 2017, mainly to handle column groups containing columns with an ID greater than 256; 18 Aug 2017, to output uni[var]char values as strings when @elo = N; and 31 Aug 2017, to extend the @elo handling and make other tweaks (the procedure was over 3.5 times faster than optdiag in both ASE 15.5 and 16.0). The ASE 16.0 version was updated 25 Mar 2023 to have better partition handling, output known integers without decimal places, and cater for decimal values greater than 9; and again on 11 May 2023 to fix bugs and to provide two versions, one with a user defined function that outputs integer values with thousands separators, and the other outputting integer values without them
ASE 15.5 (10 Mar 2018 : partition creation order b), 15.7 (10 Mar 2018 : partition creation order b) or 16.0 (not using UDF 11 May 2023) (using UDF 11 May 2023)
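For example (a sketch, assuming the usual sp__optdiag convention of a table name plus an optional column name - the procedure source is the authoritative reference) :

    -- statistics for every column group of one table, formatted as optdiag would show them
    exec sp__optdiag 'big_table'

    -- narrow the output to a single column (hypothetical table and column names)
    exec sp__optdiag 'big_table', 'trade_date'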

sp_thresholdaction : In 2004 I was working for a company that was having problems with their transaction log sequence being broken when they tried to load production log dumps into a standby system. It was traced to an issue where the same log threshold was crossed multiple times within a very short time frame, causing the same file name to be reused for successive transaction log dumps. I amended the procedure to handle these problems, wrote an article that appeared in a 2004 issue of the ISUG Technical Journal, and made the procedure available to the ISUG community. I have made further changes for this version to make it more flexible. It is valid for all versions of ASE from 15.0 onwards (and probably pre-ASE 15 too), but it will need some customisation for your environment, depending upon whether milliseconds are to appear in the file name and where the files are to be stored.
(12 Jun 2017 : print and error handling changes)   updated draft of the article
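To illustrate the file-naming idea described above, here is a minimal sketch - not the downloadable procedure, which is far more flexible. It assumes a /dumps directory, and builds the dump file name from the database name plus a timestamp down to milliseconds, so that rapid successive threshold crossings cannot reuse a name :

    create procedure sp_thresholdaction
        @dbname       varchar(30),   -- the standard threshold action parameters
        @segment_name varchar(30),
        @space_left   int,
        @status       int
    as
    begin
        declare @now  datetime,
                @file varchar(255)

        select @now = getdate()

        -- yyyymmdd_hhmmssmmm - the milliseconds stop the same name being
        -- reused when the threshold fires more than once in a second
        select @file = '/dumps/' + @dbname + '_'
             + convert(varchar(8), @now, 112) + '_'
             + right('0'  + convert(varchar(2), datepart(hh, @now)), 2)
             + right('0'  + convert(varchar(2), datepart(mi, @now)), 2)
             + right('0'  + convert(varchar(2), datepart(ss, @now)), 2)
             + right('00' + convert(varchar(3), datepart(ms, @now)), 3)
             + '.trn'

        -- only dump the log when the log segment's threshold is crossed
        if @segment_name = 'logsegment'
            dump transaction @dbname to @file
        else
            print 'threshold crossed on segment %1! of %2!, %3! pages left',
                  @segment_name, @dbname, convert(varchar(11), @space_left)
    end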

sp_rpm_tablesize : Anyone who has spent time administering ASE has probably written a procedure that uses sp_spaceused to get table size information for more than just one table at a time. This is my take on such a procedure, but with some (for me) useful added abilities. The output can be sorted by any of the columns, via the @sort parameter. The results can be whittled down using a specific table name or pattern, as you'd expect, but also by a restriction on any one of the columns, via the @res_type and @res_value parameters; the restriction columns can also be compared with other columns by using '=' or '<' followed by a column letter in @res_value. The name of a table containing the names or IDs of the specific tables to report on can be supplied via the @tab_list parameter. A simplistic check can be done, via the @check parameter, to identify tables whose row count does not match the expected data size when the average row size is applied. A free-text set of additional restrictions can be specified via the @add_rest parameter, and the results can be saved in a pre-existing temporary table by supplying its name in @save. If the UDF sp_rpm_str_sep_int is loaded, the @use_udf parameter will output values with thousands separators. These versions use "unsigned bigint" for the row counts, so they cannot be used in pre-ASE 15.5 servers. Although the ASE 15.5+ version can be used in ASE 16.0 servers, a new methodology for getting size information means it is not totally accurate there; use the ASE 16.0+ version to get accurate results.
ASE 15.5+  (14 Mar 2017),  ASE 16.0  (25 May 2023)
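A sketch of how the parameters described above might combine - the parameter names come from the description, but every value here is an assumption, so run the procedure with "help" as the first parameter for the authoritative usage :

    exec sp_rpm_tablesize
         @sort      = 'r',          -- assumed : sort by the row count column
         @res_type  = 'r',          -- assumed : restrict on the row count...
         @res_value = '>1000000',   -- ...to tables with more than 1,000,000 rows
         @use_udf   = 'Y',          -- separators via sp_rpm_str_sep_int, if loaded
         @save      = '#tabsizes'   -- pre-existing temporary table for the results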

sp_generic_device_free_space : For ASE 15.0+  (13 Feb 2017)

sp_rpm_summ_stats : In my 23+ years of using ASE (under its different names - does anyone else remember Sybase SQL Server 4.9.2, and were you as puzzled as I was by the jump from four to ten?) I seem to have spent a significant proportion of my time investigating statistics. This is one of the procedures I wrote to help with that. It summarises statistics without outputting the volume of information that [sp__]optdiag outputs. It uses "unsigned bigint" for the row counts, so it is only for ASE 15.5+. As with all the procedures I write from scratch, use "help" or "?" as the first parameter to get information on how to use it and, in this case, an explanation of some of the output. Updated 14 Sep 2017 to have partition summary information and individual row counts for partitioned tables.
For ASE 15.5+  (10 Mar 2018 : partition creation order b)
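For example, using the help convention just mentioned (the plain table-name call is an assumption) :

    -- full usage information and an explanation of some of the output
    exec sp_rpm_summ_stats 'help'

    -- assumed : summarise the statistics for one table
    exec sp_rpm_summ_stats 'big_table'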

sp_rpm_custom_stats : The procedure above and this one were written due to issues seen when joining multiple multi-million row tables. An article I wrote for the Third Quarter 2006 ISUG Technical Journal (P1 & 2  P3 & 4  P5) discusses the issue in more detail and what was done to try to solve the problem. Writing the original version of this procedure, which just changed the requested steps for tables that could not be partitioned, was much easier than modifying it to handle the various statistics attributes that can be changed for partitionable tables in ASE 15.7+. There is a chance that it will not behave correctly for some combinations of table types, index types and partition types. If so, please let me have the details and I will endeavour to sort the problem out. As I found out when writing the original version of this procedure, statistics can stand up to some terrible mis-handling; if there are problems, delete the statistics and run an update statistics. The procedure should work for ASE 15.0+, though only requested step and density values can be changed pre-ASE 15.7.
(31 Jul 2017 : tidy up help and change @silent default to N)

sp_rpm_shuffle_stats : Consider this scenario. A table in a database contains sets of rows that are grouped by a unique date. Each week day a new set of rows is added to the table for that date and, when the day is over, the rows for, say, 28 days ago are deleted. The database contains many such tables, many of them very large, and there is near constant activity against the database during the week, so it is not possible to update the statistics every day to pick up the new date's rows. At most, the statistics for the tables in the database are updated once a week; but the stats updating maintenance window is sometimes shortened by other activity, so some weekends not all of the tables get their statistics updated. In ASE 15.7 ESD#2+ it is possible to use "out of range", but that will only really be accurate for the first day's data added after the statistics were updated; each subsequent day leads to increasingly incorrect assumptions for a specific new date.

This procedure shuffles the statistics for the [small|big]datetime grouping column of such a table. Running it after the rows for, say, 28 days ago have been deleted removes the pair of statistics entries for that date from the start of the statistics, moves each other date's pair of entries down one pair, and changes the last pair to have the new date and the weight from the first entry. It relies on there being two statistics entries for each unique date (the first entry of the pair having a NULL value and a zero weight), and on the weights for each date (and hence the number of rows) being approximately the same. It is the process mentioned under Rolling Dates on page 5 of the ISUG journal scanned for sp_rpm_custom_stats - which I never got to implement for the company that employed me when I wrote the article. Note that in ASE 16.0, using a datatype of date resulted in two stats entries for the oldest date but only one '<=' entry for the other dates, so columns with a datatype of date are not valid for this process; bigdatetime, datetime and smalldatetime produced suitable stats entries - when sufficient steps were allowed.
For ASE 15.5+  (26 Oct 2017 : better bigdatetime handling and elapsed time added when debug used)
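A hypothetical invocation for the 28-day scenario above, run after the oldest date's rows have been deleted - both parameter names are assumptions, so check the procedure's help first :

    -- shuffle the grouping column's stats : drop the oldest date's pair of
    -- entries and append a pair for the new date
    exec sp_rpm_shuffle_stats
         @tab_name = 'daily_positions',   -- hypothetical table...
         @col_name = 'position_date'      -- ...and its datetime grouping column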

sp_rpm_copy_stats : Consider this scenario. The schema for tables in a database is usually altered using the following method :
1) the existing table is renamed
2) a new version of the table is created (usually simply by adding new columns to the end of the existing schema)
3) the data in the renamed table is inserted into the new version of the table, with only the new columns having data that was not in the original version
4) indexes are created on the new version of the table
5) index statistics are updated for the new version of the table
6) if there were any problems at a prior stage, the next step is not done; instead the new version of the table is dropped and the table is renamed back to its original name
7) if there were no problems, the old table is dropped.
For a small to not-too-large table, this process doesn't take much time; for a large table with many indexed columns, it can take a long time. Because the data in the existing columns hasn't changed, the statistics for those columns are valid for the new version of the table as well. This procedure copies the statistics (summaries, values and weights) from the existing table so that the new version of the table ends up with statistics. In the scenario above, that is simply a matter of changing the object ID to be that of the new version of the table. However, stage one of the procedure can also automatically handle existing columns changing position in the new version of the table, as long as each retains the same name in its new position; that involves changing the colidarray values to have the new IDs of the moved columns. If any columns with statistics change position and / or name, the procedure can handle that too, but it needs to be given the current and new column names in the @col_manual parameter, in the format '<original>=<new>[ |,|;]'. With this procedure, steps 1), 2) and 3) would be as above; step 4) would be the execution of this procedure; step 5) would be the creation of the indexes, which updates the statistics for their leading columns unless that is stopped (I don't know how much time leading-column statistics updating adds to index creation); step 6) would be updating statistics, but only for those new columns that are in an index as a non-leading column; step 7) would be the old 6) and step 8) the old 7). Stage two, which handles columns changing datatypes (as long as it's possible to explicitly convert between the current and new datatypes), is complete. Stage three, which handles changes to partitioned tables (and can also handle indexes changing, if index ID is ever stored in sysstatistics), is complete. Stage four, which handles cross-database and cross-ASE server stats copying, is complete.
For ASE 15.5+  (25 May 2018 : extended to handle cross-database and cross-ASE server stats copying, and improve table and user checking)  (14 Sep 2017 : example 1 output)  (14 Sep 2017 : example 1 [sp__]optdiag output)  (14 Sep 2017 : example 2 output)  (14 Sep 2017 : example 2 [sp__]optdiag output)
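A hypothetical invocation - @col_manual and its format come from the description above, while the other parameter names are assumptions :

    -- copy the stats from the renamed table to its new version, telling the
    -- procedure about two columns that were renamed ('<original>=<new>',
    -- separated by spaces, commas or semicolons)
    exec sp_rpm_copy_stats
         @old_table  = 'trades_old',   -- hypothetical parameter names and tables
         @new_table  = 'trades',
         @col_manual = 'trade_dt=trade_date,qty=quantity'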

sp_rpm_append_stats : Consider this scenario. The same situation as for sp_rpm_shuffle_stats, except that the tables are not yet fully populated : new sets of data are added every day but the oldest set is not deleted. This procedure appends a new pair of histogram entries for the first date more recent than the most recent one with stats, and massages the weights of the existing stats and the information in the summary row.
For ASE 15.5+  (31 Oct 2017)

ASEBCPipe.ksh : Unless you are the DBA of both ASE servers (so that you could, for example, set up and use proxy tables), the simplest and best way to copy data between tables in them is to use BCP. If you stick to the SAP documentation, a file has to be used to store the data between the BCP out and the BCP in. However, it is possible to use a pipe to join the OUT and IN together and do both simultaneously, which takes about half the time of doing them separately. Many moons ago, between employments, I had ASE 12.0 and a Korn shell interface on an old laptop, so I wrote a Korn shell script that uses a pipe to perform this type of copying between two existing tables, plus copying to / from a compressed file. I have now updated it to handle ownership, allow the destination table to be created, drop it first if it is in a specific database, and be more selective about which indexes are dropped and re-created. The new version of the script was updated on a virtualised RHEL 7.4 running ASE 16.0 SP03 PL02, so it uses Linux line terminators; use unix2dos or something other than Notepad to read it in Windows.
For ASE 15.5+  (1 Feb 2018)
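The core of the piped idea, as a minimal Korn shell sketch rather than the script itself (the script also handles ownership, table creation, index dropping / re-creation and compressed files; the server names, credentials and batch size here are placeholders) :

    #!/bin/ksh
    # copy srcdb..big_table to destdb..big_table through a named pipe,
    # so that the BCP out and the BCP in run simultaneously
    PIPE=/tmp/asebcp.$$
    mkfifo "$PIPE" || exit 1

    # writer : character-mode BCP out of the source server, in the background
    bcp srcdb..big_table out "$PIPE" -c -U "$BCP_USER" -P "$BCP_PASS" -S SRC_ASE &

    # reader : simultaneous BCP into the destination server, batching the rows
    bcp destdb..big_table in "$PIPE" -c -U "$BCP_USER" -P "$BCP_PASS" -S DEST_ASE -b 10000

    wait          # make sure the background OUT has also finished
    rm -f "$PIPE"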

MultiASEBCPipe.ksh : The above Korn shell script only operates on one table at a time. This Korn shell script can be given a list of up to 99 tables and will execute the script above for each of them, in the background and simultaneously (which could overload the two ASE servers, so a maximum number of concurrent copies can be specified). All of the tables can be processed in the same way, or flags can be used to operate on only some of the tables in the list, using inclusion or exclusion lists. The new version of the script was updated on a virtualised RHEL 7.4 running ASE 16.0 SP03 PL02, so it uses Linux line terminators; use unix2dos or something other than Notepad to read it in Windows.
For ASE 15.5+  (17 Jan 2018)