Generally, pick CHAR if all rows will have close to the same length, and pick VARCHAR when the length varies significantly. CHAR may also be a bit faster because all the rows are the same length. It varies by DB implementation, but generally VARCHAR uses one or two extra bytes of storage (for the length or a terminator) in addition to the actual data.
So, assuming you are using a one-byte character set, storing the word "FooBar":

CHAR(6) = 6 bytes (no overhead)
VARCHAR(10) = 7 bytes (1 byte of overhead)
CHAR(10) = 10 bytes (4 bytes of overhead)

Bottom line: CHAR can be faster and more space-efficient for data of roughly the same length (within about two characters' difference).
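To make the arithmetic above concrete, here is a tiny Python sketch. The 1-byte VARCHAR overhead is the assumption this answer uses; real engines may use 2 bytes or more, so treat the `overhead` parameter as illustrative, not authoritative:

```python
def char_storage(value, width):
    # CHAR always occupies the declared width: the value is space-padded
    return width

def varchar_storage(value, overhead=1):
    # VARCHAR occupies the actual data plus a small length/terminator overhead
    return len(value) + overhead

word = "FooBar"
print(char_storage(word, 6))     # 6  bytes: CHAR(6), no overhead
print(varchar_storage(word))     # 7  bytes: VARCHAR(10), 1 byte of overhead
print(varchar_storage(word, 2))  # 8  bytes: with a 2-byte length prefix
print(char_storage(word, 10))    # 10 bytes: CHAR(10), 4 bytes of padding
```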
If you're working with me and you're working with Oracle, I would probably make you use varchar in almost every circumstance. The assumption that char uses less processing power than varchar may be true...for now...but database engines get better over time and this sort of general rule has the making of a future "myth". Another thing: I have never seen a performance problem because someone decided to go with varchar.
You will make much better use of your time writing good code (fewer calls to the database) and efficient SQL (how do indexes work, how does the optimizer make decisions, why is EXISTS usually faster than IN...). Final thought: I have seen all sorts of problems with the use of CHAR: people looking for '' when they should be looking for ' ', people looking for 'FOO' when they should be looking for 'FOO (bunch of spaces here)', people not trimming trailing blanks, and bugs with PowerBuilder adding up to 2000 blanks to the value it returns from an Oracle procedure.
Sorry, just a pet peeve: it should start out with "If you're working with me and you're working with Oracle..." – thursdaysgeek Nov 4 '08 at 21:29

I disagree somewhat with your first paragraph, since char may provide a hint that could be useful to optimizers, even future ones, and it may help to communicate the intent of the column. But +1 for your third paragraph. I hate all the extra spaces.
A field should just store whatever I put in it without all the explicative padding. Basically, I just use char if all of the data is to be exactly the same length, no more and no less, now and forever. This is very rare, of course, and is usually a char(1).
– Jeffrey L Whitledge Dec 10 '09 at 17:34.
Char is a little bit faster, so if you have a column that you KNOW will be a certain length, use char. For example, storing (M)ale/(F)emale/(U)nknown for gender, or 2 characters for a US state.
Not sure that's a GREAT answer, since an ENUM would usually make a lot more sense, although I'm not sure how widely supported that type is (outside of MySQL). – Bobby Jack Sep 14 '08 at 18:26

Seems to me that the set of states is not necessarily immutable, so char(2) seems much more appropriate than an enum. – Kearns Sep 18 '08 at 13:17

@Bobby Jack - I don't know the specific details of any particular SQL enum implementation, but keep in mind that an enum stored as a 4-byte integer might require more space than a char(1) or char(2) column with the same data.
There is a sense in which enums are more logical in terms of their interpretation, and that might be compelling, but everything in an RDBMS is abstract at some level and subject to the predicates defined for the tables. – Jeffrey L Whitledge Dec 10 '09 at 17:28

Bad example, ENUM is best for that case. A better example would be a 3-letter IATA airport code. – Andrew G. Johnson Jan 21 '10 at 19:44

@Andrew, not all DBs support ENUM data types. MS SQL Server, for example, does not. Also, an ENUM, stored as an int, takes 4 bytes. CHAR(1) takes 1 byte, and NCHAR(1) takes 2 bytes. – Jarrett Meyer Jan 21 '10 at 20:58
There is a difference between early performance optimization and using a best-practice type of rule. If you are creating new tables where you will always have a fixed-length field, it makes sense to use CHAR; you should be using it in that case. This isn't early optimization, but rather implementing a rule of thumb (or best practice). I.e.:
- If you have a 2 letter state field, use CHAR(2). If you have a field with the actual state names, use VARCHAR.
In addition to the performance benefits, CHAR can be used to indicate that all values should be the same length, e.g., a column for U.S. state abbreviations.
I would choose varchar unless the column stores a fixed-length value like a US state code -- which is always 2 characters long, and the list of valid US state codes doesn't change often :). In every other case, even for storing a hashed password (which is fixed length), I would choose varchar. Why? A char column is always padded with spaces, so for a column my_column defined as char(5) with the value 'ABC' inside, the comparison my_column = 'ABC' is false, because my_column actually stores 'ABC  ' (with trailing spaces), which is different from 'ABC'.
This feature could lead to many irritating bugs during development and makes testing harder.
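A quick Python sketch of that failure mode; `ljust` stands in for the database's space padding, and this models an engine that does not trim trailing blanks on comparison (some engines do, which is exactly why the behavior is confusing):

```python
# A CHAR(5) column pads 'ABC' with spaces on write (modeled here with ljust)
stored = "ABC".ljust(5)          # 'ABC  '

print(stored == "ABC")           # False: the padded value no longer matches
print(stored.rstrip() == "ABC")  # True: trimming trailing blanks fixes it
```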
At least in MSSQL Server, 'abc ' = 'abc'. I've never quite figured out if I like or detest that feature... – Mark Brackett Oct 23 '08 at 16:59
Data to store: "ABCD"

CHAR(4) takes 4 bytes; VARCHAR(4) takes 6 bytes.
CHAR(40) takes 40 bytes; VARCHAR(40) takes 6 bytes.
CHAR(400) takes 400 bytes; VARCHAR(400) takes 402 bytes.
Hope this helps you to understand when to use what. – Sunil Kumar Lalwani Jan 21 '10 at 19:44

Mistake: "ABCD" stored in a VARCHAR(400) will also only take 6 bytes. – Guy Mar 22 '10 at 7:13
CHAR takes up less storage space than VARCHAR if all your data values in that field are the same length. Now perhaps in 2009 an 800GB database is, for all intents and purposes, the same as an 810GB one if you converted the VARCHARs to CHARs, but for short strings (1 or 2 characters), CHAR is still an industry "best practice", I would say. Now if you look at the wide variety of data types most databases provide even for integers alone (bit, tinyint, int, bigint), there ARE reasons to choose one over the other. Simply choosing bigint every time is actually being a bit ignorant of the purposes and uses of the field. If a field simply represents a person's age in years, a bigint is overkill. Now it's not necessarily "wrong", but it's not efficient. But it's an interesting argument, and as databases improve over time, it could be argued that CHAR vs VARCHAR becomes less relevant.
Great question. The simple answer is yes in certain situations. Let's see if this can be explained.
Obviously we all know that if I create a table with a column of varchar(255) (let's call this column myColumn) and insert a million rows but put only a few characters into myColumn for each row, the table will be much smaller (in the overall number of data pages needed by the storage engine) than if I had created myColumn as char(255). Any time I do an operation (DML) on that table and request a lot of rows, it will be faster when myColumn is varchar because I don't have to move around all those "extra" spaces at the end. Move, as in when SQL Server does internal sorts, such as during a DISTINCT or UNION operation, or if it chooses a merge join during its query plan, etc. Move could also mean the time it takes to get the data from the server to my local PC, or to another computer, or wherever it is going to be consumed.

But there is some overhead in using varchar. SQL Server has to use a two-byte indicator (overhead) on each row to know how many bytes that particular row's myColumn holds. It's not the extra 2 bytes that presents the problem; it's having to "decode" the length of the data in myColumn on every row.
In my experiences it makes the most sense to use char instead of varchar on columns that will be joined to in queries. For example the primary key of a table, or some other column that will be indexed. CustomerNumber on a demographic table, or CodeID on a decode table, or perhaps OrderNumber on an order table.
By using char, the query engine can perform the join more quickly because it can do straight pointer arithmetic (deterministically) rather than having to move its pointers a variable number of bytes as it reads the pages. I know I might have lost you in that last sentence. Joins in SQL Server are based around the idea of "predicates." A predicate is a condition, for example myColumn = 1. Evaluating those predicates takes time. Now bear in mind this can easily be poorly implemented.
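The "straight pointer arithmetic" idea can be illustrated with a toy Python sketch. This is not how SQL Server actually lays out its pages; it only shows the access-pattern difference: fixed-width rows are addressable with one multiplication, while variable-length rows must be walked via their length prefixes (here, a toy 1-byte prefix):

```python
WIDTH = 10

def fixed_row(buf, i):
    # fixed-width rows: row i starts at a computable offset (one multiply)
    return buf[i * WIDTH:(i + 1) * WIDTH].decode().rstrip()

def varlen_row(buf, i):
    # variable-length rows: walk a 1-byte length prefix row by row
    pos = 0
    for _ in range(i):
        pos += 1 + buf[pos]
    return buf[pos + 1:pos + 1 + buf[pos]].decode()

names = ["Joe", "Blow", "Washington"]
fixed = b"".join(n.encode().ljust(WIDTH) for n in names)
varlen = b"".join(bytes([len(n)]) + n.encode() for n in names)

print(fixed_row(fixed, 2))    # Washington (reached in O(1))
print(varlen_row(varlen, 2))  # Washington (reached by scanning rows 0 and 1)
```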
I have seen char used for primary key fields in online systems. The width must be kept small, i.e. char(15) or something reasonable.
And it works best in online systems because you are usually only retrieving or upserting a small number of rows, so having to "rtrim" those trailing spaces you'll get in the result set is a trivial task as opposed to having to join millions of rows from one table to millions of rows on another table. Another reason CHAR makes sense over varchar on online systems is that it reduces page splits. By using char, you are essentially "reserving" (and wasting) that space so if a user comes along later and puts more data into that column SQL has already allocated space for it and in it goes.
Another reason to use CHAR is similar to the second reason. If a programmer or user does a "batch" update to millions of rows, adding some sentence to a note field for example, you won't get a call from your DBA in the middle of the night wondering why their drives are full. In other words, it leads to more predictable growth of the size of a database.
So those are 3 ways an online (OLTP) system can benefit from char over varchar. I hardly ever use char in a warehouse/analysis/OLAP scenario because usually you have SO much data that all those char columns can add up to lots of wasted space. Keep in mind that char can make your database much larger but most backup tools have data compression so your backups tend to be about the same size as if you had used varchar.
For example LiteSpeed or RedGate SQL Backup. Another use is in views created for exporting data to a fixed width file. Let's say I have to export some data to a flat file to be read by a mainframe.
It is fixed width (not delimited). I like to store the data in my "staging" table as varchar (thus consuming less space in my database) and then use a view to CAST everything to its char equivalent, with the length corresponding to the width of the fixed-width file for that column. For example:

    create table tblStagingTable (
        pkID BIGINT IDENTITY(1,1),
        CustomerFirstName varchar(30),
        CustomerLastName varchar(30),
        CustomerCityStateZip varchar(100),
        CustomerCurrentBalance money
    )

    insert into tblStagingTable
        (CustomerFirstName, CustomerLastName, CustomerCityStateZip, CustomerCurrentBalance)
    values
        ('Joe', 'Blow', '123 Main St Washington, MD 12345', 123.45)

    create view vwStagingTable AS
    SELECT
        CustomerFirstName      = CAST(CustomerFirstName as CHAR(30)),
        CustomerLastName       = CAST(CustomerLastName as CHAR(30)),
        CustomerCityStateZip   = CAST(CustomerCityStateZip as CHAR(100)),
        CustomerCurrentBalance = CAST(CAST(CustomerCurrentBalance as NUMERIC(9,2)) AS CHAR(10))
    FROM tblStagingTable

    SELECT * from vwStagingTable

This is cool because internally my data takes up less space, since it's using varchar.
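The same pad-to-width trick works outside SQL too. Here is a hedged Python sketch of producing one fixed-width line per row, in the spirit of the CAST(... AS CHAR(n)) view above; the widths are just the ones from that example, and any real mainframe file spec would dictate its own:

```python
def to_fixed_width(row, widths):
    # pad (or truncate) each field to its column width, like CAST(... AS CHAR(n))
    return "".join(str(value).ljust(width)[:width] for value, width in zip(row, widths))

line = to_fixed_width(
    ("Joe", "Blow", "123 Main St Washington, MD 12345", "123.45"),
    (30, 30, 100, 10),
)
print(len(line))  # 170: every record in the flat file has the same length
```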
But when I use DTS or SSIS, or even just cut and paste from SSMS to Notepad, I can use the view and get the right number of trailing spaces. In DTS we used to have a feature called, damn, I forget... I think it was called "suggest columns" or something. In SSIS you can't do that anymore; you have to tediously define the flat file connection manager. But since you have your view set up, SSIS can know the width of each column, and it can save a lot of time when building your data flow tasks. So bottom line... use varchar.
There are a very small number of reasons to use char, and they are all performance reasons. If you have a system with hundreds of millions of rows, you will see a noticeable difference if the predicates are deterministic (char), but for most systems using char is simply wasting space. Hope that helps.
Jeff.
There is some small processing overhead in calculating the actual needed size for a column value and allocating the space for a Varchar, so if you are definitely sure how long the value will always be, it is better to use Char and avoid the hit.
When using varchar values, SQL Server needs an additional 2 bytes per row to store some info about that column, whereas with char it doesn't need that; so unless your data really varies in length, char saves you that per-row overhead.
In some SQL databases, VARCHAR will be padded out to its maximum size in order to optimize the offsets; this is to speed up full table scans and index lookups. Because of this, you do not get any space savings by using a VARCHAR(200) compared to a CHAR(200).
What you describe normally applies to CHAR, not VARCHAR. – Richard Simões Jul 29 '09 at 18:57

MySQL will convert varchars to chars if there are chars and varchars in the same table. – Malfist Sep 16 '09 at 21:26
I stand by Jim McKeeth's comment. Also, indexing and full table scans are faster if your table has only CHAR columns. Basically the optimizer will be able to predict how big each record is if it only has CHAR columns, while it needs to check the size value of every VARCHAR column.
Besides, if you update a VARCHAR column to a size larger than its previous content, you may force the database to rebuild its indexes (because you forced the database to physically move the record on disk), while with CHAR columns that will never happen. But you probably won't care about the performance hit unless your table is huge.
Remember Knuth's wise words: premature optimization is the root of all evil.
There is a degree of speculation in your comment. I have seen time and time again assumptions like these get tested and the exact opposite turn out to be true. The problem is many engineers will take info like this as the gospel.
Please folks, create test cases which reflect your real situations. – Ethan Post Sep 12 '08 at 18:53

Ethan is totally correct. This depends so much on the implementation you're using that without a reference to an actual (product, version) it's completely useless. – David Schmitt Sep 13 '08 at 10:20
It's the classic space versus performance tradeoff. In MS SQL 2005, Varchar (or NVarchar for languages requiring two bytes per character, e.g. Chinese) is variable length. If you add to the row after it has been written to the hard disk, the data will be located in a non-contiguous location from the original row, leading to fragmentation of your data files. This will affect performance. So, if space is not an issue, then Char is better for performance, but if you want to keep the database size down, then varchars are better.
I think in your case there is probably no reason not to pick Varchar. It gives you flexibility and, as has been mentioned by a number of respondents, performance is such now that except in very specific circumstances us mere mortals (as opposed to Google DBAs) will not notice the difference. An interesting thing worth noting when it comes to DB types is that SQLite (a popular mini database with pretty impressive performance) stores everything in the database as a string and types on the fly.
I always use VarChar and usually make it much bigger than I might strictly need, e.g. 50 for FirstName; as you say, why not, just to be safe.
Many people have pointed out that if you know the exact length of the value, using CHAR has some benefits. But while storing US states as CHAR(2) is great today, when you get the message from sales that "We have just made our first sale to Australia", you are in a world of pain. I always tend to overestimate how long I think fields will need to be rather than making an 'exact' guess, to cover for future events.
VARCHAR will give me more flexibility in this area.
There are performance benefits, but here is one that has not been mentioned: row migration. With char, you reserve the entire space in advance. So let's say you have a char(1000) and you store 10 characters: you will use up all 1000 characters of space. In a varchar2(1000), you will only use 10 characters.

The problem comes when you modify the data. Let's say you update the column to now contain 900 characters. It is possible that the space to expand the varchar is not available in the current block. In that case, the DB engine must migrate the row to another block and leave a pointer in the original block to the new row in the new block.

To read this data, the DB engine will now have to read 2 blocks. No one can unequivocally say that varchar or char is better. There is a space-for-time tradeoff, and a consideration of whether the data will be updated, especially if there is a good chance that it will grow.
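Here is a toy Python simulation of that row-migration cost. It is purely illustrative (real engines manage blocks, free space, and forwarding pointers very differently), but it shows why an update that outgrows its block doubles the reads for that row:

```python
class Block:
    def __init__(self, capacity):
        self.capacity = capacity
        self.rows = {}       # row_id -> value
        self.forward = {}    # row_id -> block now holding the migrated row

    def free(self):
        return self.capacity - sum(len(v) for v in self.rows.values())

def update(block, overflow, row_id, new_value):
    growth = len(new_value) - len(block.rows[row_id])
    if growth <= block.free():
        block.rows[row_id] = new_value           # expand in place
    else:
        # no room to expand in place: migrate and leave a forwarding pointer
        del block.rows[row_id]
        overflow.rows[row_id] = new_value
        block.forward[row_id] = overflow

def blocks_read(block, row_id):
    # a reader must follow the forwarding pointer if the row has migrated
    return 1 if row_id in block.rows else 2

home, overflow = Block(1000), Block(1000)
home.rows["r1"] = "x" * 10                 # a varchar storing 10 characters
update(home, overflow, "r1", "x" * 100)    # growth 90 <= 990 free: in place
print(blocks_read(home, "r1"))             # 1
home.rows["r2"] = "y" * 880                # block is now nearly full
update(home, overflow, "r1", "x" * 900)    # growth 800 > 20 free: migrates
print(blocks_read(home, "r1"))             # 2: home block plus overflow block
```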
– Matt Rogish Sep 13 '08 at 2:37

No, that's correct. – Toby Allen May 15 '11 at 13:21

matt you are right... edited – Tony BenBrahim May 19 '11 at 5:23
That doesn't make much sense. What you're saying is basically "Why optimize my code when it works?" There may be performance benefits, but who cares about that?