
DD510 and Backup Exec cross-domain backup June 29, 2009

Posted by jamesisaac in Uncategorized.

We purchased a Data Domain DD510 appliance, which I intend to use as a target for backing up our data at the data center. This will replace three separate servers using Backup Exec and Ultrium tape drives. I ran into a snag because there are two separate domains in our environment.

I initially installed the DD510 in “Active Directory” mode, which uses the LDAP connector to authenticate against our AD. No problems there – everything worked fine, and I could set security and map shares from any server in the joined domain. However, Backup Exec in the other domain refused to allow me to create a “backup-to-disk” folder on the DD510. Apparently this is a known issue, as googling for “Backup Exec backup to disk access denied” returns many links.

I tried changing the Backup Exec services accounts to use pass-thru authentication and even tinkered with trusting across domains, but had no luck until I removed the DD510 from our domain and put it back into Workgroup authentication. After that, BE worked like a charm.

The key is to create a backup user on the DD510, create local users with the same username and password on whatever servers BE is running on, and then set the BE services to log on with that account. So now the DD510 is back to being a backup appliance instead of a general-purpose file server repository – which is a little less flexible, but probably more controllable.
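On the Windows side, the matching-account trick looks roughly like the commands below. The username, password, and service name here are hypothetical examples, not the actual values I used – substitute your own, and note that the Backup Exec Services Manager GUI does the same thing as the `sc config` step:

```shell
:: On each server running Backup Exec, create a local account that exactly
:: matches the backup user already created on the DD510 (same name, same password):
net user bkupuser "S0mePassw0rd!" /add
net localgroup Administrators bkupuser /add

:: Point a Backup Exec service at that local account so its credentials
:: pass through to the DD510's workgroup authentication.
:: "BackupExecJobEngine" is an example service name - check your installed
:: service names with "sc query" or use the BE Services Manager instead.
sc config BackupExecJobEngine obj= ".\bkupuser" password= "S0mePassw0rd!"
```

Because the appliance is in workgroup mode, it has no way to validate a domain token; identical local credentials on both ends are what make the pass-through work.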

After running backups for a week onto the device, I am suitably impressed. Backup-to-disk is much faster than even the local Ultrium tape drive that I was using, and the dedupe reduces each additional full backup by 95% as promised. YMMV, of course – what remains is the delta between the two backups, which the on-disk compression reduces even further.

One remaining issue is that we have several folders full of many small files (like hundreds of thousands of small files), and performance is abysmal when backing up those files. I suppose it’s due to the overhead of all the security descriptors and other metadata that each file carries with it. I’m going to investigate doing an image backup instead of a file-by-file backup and see if that gives us the necessary performance.
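The per-file overhead is easy to demonstrate in miniature. This is just an illustrative sketch (a throwaway temp tree, not our actual data): every file costs a metadata round-trip regardless of its size, so a tree of hundreds of thousands of tiny files pays that fixed cost hundreds of thousands of times, which is why an image-level backup that reads the volume sequentially can be so much faster:

```python
import os
import tempfile
import time

# Build a throwaway tree of 1,000 one-byte files (hypothetical scale;
# the real problem folders hold hundreds of thousands).
root = tempfile.mkdtemp()
for i in range(1000):
    with open(os.path.join(root, f"f{i:04d}.txt"), "w") as f:
        f.write("x")

# Walk the tree and read each file's metadata, the way a
# file-by-file backup must before it ever touches the data.
start = time.perf_counter()
total = 0
for dirpath, _, filenames in os.walk(root):
    for name in filenames:
        os.stat(os.path.join(dirpath, name))  # one metadata hit per file
        total += 1
elapsed = time.perf_counter() - start

print(total)  # 1000 - the fixed per-file cost is paid 1,000 times
```

Divide `elapsed` by `total` and you get a per-file cost that does not shrink as files get smaller – with enough files, metadata handling dominates the backup window.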


