On September 11, 2013, members of the IBM Domino Support team shared their tips on using the Database Maintenance Tool (DBMT) and Compact Replication in Domino 9.0.
Attendees were given an opportunity to ask a panel of experts some questions. The call was recorded and slides were made available.
Follow highlights from these Open Mics live on Twitter using #ICSOpenMic or by following us on Twitter @IBM_ICSsupport.
Topic: Using the Database Maintenance Tool (DBMT) and Compact Replication in Domino 9
Day: Wednesday, September 11, 2013
Time: 11:00 AM EDT (15:00 UTC/GMT, UTC-4 hours) for 60 minutes
For more information about our Open Mic webcasts, visit the IBM Collaboration Solutions Support Open Mics page.
Using DBMT & Compact Replication in Domino 9 Open Mic Sep 11 2013 (edited).mp3
Q: On a clustered environment, if no replica exists for a db, would DBMT create one - or would it simply not run on the single db?
A: It will not create a database replica. It would still run on a single database that has no replicas.
Q: Documentation states that the -compactThreads and -updallThreads should be based on the number of disks backing the data directory... shouldn't this be based on the number of CPU threads?
A: Compact is disk-limited, not CPU-limited, so it really should be based on the number of disks, with some influence from the CPU. If there are significantly more disks than CPUs, you would need to turn down the number of threads.
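As a rough sketch of the guidance above, a server with four disks backing the data directory might start with four threads for each task and scale down if CPU contention appears. The thread counts and the "mail" directory argument below are illustrative values, not recommendations from the call:

```
load dbmt -compactThreads 4 -updallThreads 4 mail
```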
Q: Will this interfere with compact -a ... for archiving?
A: A separate schedule needs to be maintained for your server-based archiving.
Q: So what is the purpose to check the CLDBDIR?
A: In most clusters there are at least two replicas. The check with cldbdir.nsf ensures only one replica of a database is compacted on a given night, leaving the other available. It checks whether a cluster replica exists for a specific database so that compact will not take the same database offline on all servers at once. DBMT makes the nightly decision on whether to compact based on the day and the index in cldbdir.nsf for that server/database.
Q: Does DBMT change DBIIDs on TX logged databases?
Q: If DBMT removes the Soft Delete entries, does that mean a user's Trash will be emptied in their mail?
A: It is just expiring the old ones in the trash. This used to happen on database open.
Q: OK, so if we keep Trash items for 5 days, only items >5 days old would be removed -- correct?
A: Correct, it still respects the interval set for emptying trash.
Q: What is proper DB maintenance? If you have time, please provide a link describing proper maintenance for heavily used DBs.
A: Most admins will know if they are running an application that periodically runs into the IDTable fragmentation issue. They have been using the workaround, which is very admin-intensive. Compact Replication gives the admin a means to set up an automated process to address this issue in a very efficient manner.
Q: Did presenter say that NoteIDs are changed, but UNIDs maintained?
A: That is correct: the NoteIDs will be changed and the UNIDs maintained, exactly as if you had created a new replica on another server.
Q: Have they added a way to see the progress of the Compact task?
A: Not yet. For Compact Replication I still manually watch the size of the .REPL file and how it's progressing. When it gets close to the .NSF file size, I know it's almost done.
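The manual check described above can be scripted: compare the size of the growing .REPL file to the source .NSF. The snippet below is a minimal sketch that simulates the two files with stand-in data; real paths would point at the Domino data directory, and the percentage is only a rough estimate of progress.

```shell
# Sketch: estimate compact -REPLICA progress by comparing the size of the
# temporary .REPL file to the source .NSF. The files created here are
# stand-ins for the demo, not real Domino databases.
dir=$(mktemp -d)
head -c 1000 /dev/zero > "$dir/mail.nsf"   # simulate a 1000-byte .NSF
head -c 250  /dev/zero > "$dir/mail.repl"  # simulate a .REPL file mid-copy
nsf=$(wc -c < "$dir/mail.nsf")
repl=$(wc -c < "$dir/mail.repl")
progress=$((repl * 100 / nsf))
echo "compact -REPLICA roughly ${progress}% done"
rm -r "$dir"
```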
Q: For the log.nsf example, during the renaming, what happens to the logs? Are they kept in memory?
A: There is a .REPL file that is the new replica; nothing is kept in memory. Slide 28 shows how that works.
Q: Is Compact Replication OK to use with Traveler databases?
A: If your goal is to minimize server downtime and these Traveler databases are always in use, it should be fine.
Q: Are you able to use a program document with a schedule interval greater than the days of the week?
A: You can schedule the program document to run at regular intervals or within a time range.
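A program document for DBMT might look roughly like the sketch below. The server name, schedule, and command-line options are examples only, not values from the call:

```
Program name:      dbmt
Command line:      -range 10:00PM 6:00AM -compactThreads 4 -updallThreads 4 mail
Server to run on:  Mail01/Acme
Schedule:          Enabled; runs at 10:00 PM on Sun, Tue, Thu
```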
Q: Is there any way for the administrator to see how full the ID tables are for a database?
A: I think there is. We'll have to check on that.
Q: Are there any special considerations for Compact Replication on databases using DAOS?
A: Compact replication is compatible with DAOS. We have not had any reported issues so far.
Q: Is there a way to get the percent full of an id table on 8.5.3?
A: Unfortunately no...
Q: What should the notes.ini parameter be on slide #9? I am referring to the parameter MailFileDisableCompactAbort. I thought the speaker said that was not correct?
A: He was referring to the explanation of the parameter being incorrect, not the spelling of the parameter or the value it is set to. This parameter prevents the router from interrupting a compact being run on a mail file.
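For reference, the parameter goes in the server's notes.ini. The value shown below is an assumption of the typical enable value and is not confirmed in the call:

```
MailFileDisableCompactAbort=1
```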
Q: I just asked a question about the Readers fields and it was answered clearly in the call, but I just forgot to ask about the "Who can view this document" property (when "All Readers and above" is unchecked)... will the server still be able to move documents over to the new .REPL file? I mean, in case the server ID is not marked there...?
A: Yes - this will work as the server will have manager access to all databases that are hosted on it.
Q: Is there any program (DBMT?) that I can run to detect databases that are not enabled for DAOS and then compact them to enable DAOS?
A: Unfortunately, no. However, you can use the Admin client if you want. You can also enable DAOS in your templates.
Q: Documentation says that the -compactThreads and -updallThreads should be based on the number of disks backing the data directory... is that right?
A: Yes, to a certain extent. If the number of disks significantly outnumbers the CPUs, you need to scale back the threads because compact is I/O bound.
Q: In an environment where archive transaction logging is used for backup it's important to know when a copy-style compact runs on the databases because after that a full backup has to be done. Can I achieve this using DBMT? Can the compact style be configured so I could run a -b Monday through Friday and a -c on the weekend?
A: At the moment the only compact style available with DBMT is the copy-style compact. Even when running compact with a -b or -B option, if there are changes to the database, the DBIID can be changed automatically by either style of compact. It's usually a good idea for administrators to monitor DBIID changes through the console log, and in some cases it's possible for the archive software to pick up on DBIID changes and then take a full backup of the database at those times.
I thought it was both -b and -B that changed the DBIID, but after talking with a colleague this morning, it's just the -B that changes the DBIID. If they want to run compact -b during the week, which doesn't change the file size, they should be OK with running that. As you stated, our ultimate goal is to get the backup vendors to provide a means where you could configure their backup utility to trigger a new backup any time we log the DBIID message. Presently I don't think many of them are at that point, so you have to schedule it carefully.
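Until backup vendors react to DBIID messages automatically, the console-log monitoring suggested above can be approximated with a simple scan. The sketch below creates a sample log to search; the log lines are illustrative only, and real Domino DBIID messages may be worded differently.

```shell
# Sketch: scan a console log for DBIID change messages so the backup team
# knows which databases need a fresh full backup. The log text here is
# illustrative, not verbatim Domino output.
log=$(mktemp)
cat > "$log" <<'EOF'
09/11/2013 02:14:03 AM  Compacting mail/jdoe.nsf
09/11/2013 02:15:41 AM  DBIID changed for mail/jdoe.nsf
09/11/2013 02:16:02 AM  Compacting mail/asmith.nsf
EOF
hits=$(grep -c "DBIID" "$log")
echo "databases flagged for a new full backup: $hits"
rm "$log"
```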
Q: I tried this compact -REPLICA once on one database, and that database got corrupted. Every new document that was created saw the modified time three hours in the future and I finally had to restore from backup to resolve this problem. Have you seen this kind of problem?
A: We haven't seen that type of problem internally. We would love to have you open a PMR with IBM Support if you have a reproducible case where that happened, especially if this is with the shipped 9.0 version. We haven't seen a case where running compact -REPLICA on a database creates this kind of corruption; we would like to see an example of that.
Q: I work mostly with IBM Notes clients, and I like the idea of being able to compact a database. I do try to run compact against local databases, like bookmark.nsf, names.nsf, and I love the idea where you can compact a database while it is in use. The names.nsf is commonly in use. Are you bringing this functionality down to the client using the nCompact command?
A: Yes, it will work on the client, even with the 9.0 client. We are actually investigating, as we do ODS changes and other things in the future where we automatically upgrade your local databases, using compact -REPLICA for that instead. It is not in there yet, but we are doing testing and validation to see whether that makes sense during upgrades, so that we don't interfere with the user. The new replica would be created the next time they start their client, and it would pick up the newer ODS level. It will work in 9.0 if you just run the compact task from the command line of the client. We're looking into improving its use in general as we do upgrades between major releases.
Q: My question is about databases that have documents with readers fields and documents that are hidden from specific users. If it happens by mistake or by any other reason, that the server.id is not allowed to view a document due to a readers field limitation, what would happen to compact replication. Would it take place and would those documents disappear?
A: They shouldn't disappear. The server has access to those notes, so it should be able to copy them in just like copy-style compaction does; we don't see an issue. Since you are the server, you should be able to. It's similar to how a readers list on a local replica is a moot point: physical access to the database overrides the readers list in views and elsewhere, so if you bring the database to your client locally, your client can see everything. That is a known aspect of how readers lists are implemented, because you can do anything you want once you have the database physically. It's not an encryption-security feature, it's more of a view-security feature.
Q: Is there a way to see what percent full the ID table currently is? I know you have some debug, but I don't know if it's in the shipped product or not.
A: There is, and we've seen an INI switch so we can report it, so certainly that can be an improvement.
Q: We are running an 8.5.3 client on the server, and we have a failover server that's getting pretty full at times. We are running a "-b" on a command line on the server itself. Do you recommend compacting the biggest mailfile database individually or just putting a command line -b on the server itself?
A: Each environment will be different. If you are using DBMT and doing the compact in a certain range, for example 9 PM to 7 AM, on large databases that have been around for 20 years without quotas on them, and a database does not fit in that window, then it will be placed into dbcompactfilter.ind and won't be compacted anymore. What you would do is schedule a separate DBMT run on a certain day of the week that only handles those specific mailfiles, say once a week or once every two weeks, with no range restrictions on that particular DBMT instance. That's how you can get around having some extremely large databases. Once you have it configured with program documents you don't need to touch it; just periodically open the Admin client and validate that the last compact time for all the mailfiles is keeping up. A suggestion would be every 10 days, just for validation.
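As a sketch of the separate run described above, the oversized mailfiles can be listed in an indirect (.IND) file, one database path per line relative to the data directory (for example, mail\ceo.nsf), and a second program document can point DBMT at it with no -range, so no time window applies. The file name and thread count below are examples only:

```
load dbmt -compactThreads 2 big_mail.ind
```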
Q: There are only about 20 mail databases that I need to compact that are really very big. Do you recommend doing that on a weekend?
A: Yes, you don't want to do that during the day when the users are using their mailfiles. Again, DBMT can run against a specific mailfile, against a directory, or against an indirect file where you put those database names into a .IND file so that it only runs against those databases. It's very powerful in the different ways you can configure it. Just because they are the largest databases does not necessarily mean you are going to recover the most space: if people are not deleting any mail, there is not much to recover, and compact will just run and come back with the same size. So you will want to encourage people to clean up their data, or put quotas on them to help with that.
Q: Once we compact a database, like a 10 or 15 GB database, by what percent will it go down?
A: Again, it only matters whether they deleted data out of the database. You are only going to recover space from data that has been deleted and has aged out of the trash; for example, if you set up their mail properties to say "remove documents from trash after 48 hours", it will only remove documents from the trash that are older than two days. It all depends on how they are using it, so you would have to watch it. When the compact runs it will tell you how much space was recovered, but that's after the fact; we cannot really tell you ahead of time how much space is going to be recovered if you run compact against a database. However, in the Admin client there is a percent-used column that you can add to your Files panel that shows how much of the database is in use. When you start seeing databases that are less than 90% used, it's a good time to compact them. There is also a switch in compact that says: when the database has X percent of unused space, go ahead and do the compact. It is not supported for DBMT, however.
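The percent-unused switch mentioned here is, to the best of our reading, compact's -S option, which compacts only databases whose unused space is at least the given percentage. The 10 percent threshold and the "mail" directory below are example values:

```
load compact mail -S 10
```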
Technote 1635439: Error: "Unable To Extend an ID Table - Insufficient Memory" (new compact -REPLICA parameter)