Linq-To-SQL CreateDatabase doesn't create stored procedures?



The DataContext.CreateDatabase method creates a replica of the database only to the extent of the information encoded in the object model. Mapping files and attributes from your object model might not encode everything about the structure of an existing database.

Mapping information does not represent the contents of user-defined functions, stored procedures, triggers, or check constraints. This behavior is sufficient for a variety of databases. Stored procedures are not part of that: msdn.microsoft.com/en-us/library/bb39942....

To my knowledge, stored procedures must be declared in SQL Server Management Studio (or a similar tool) and can't be created via LINQ.

Summary: LINQ to SQL provides a runtime infrastructure for managing relational data as objects without losing the ability to query. Your application is free to manipulate the objects while LINQ to SQL stays in the background tracking your changes automatically. Most programs written today manipulate data in one way or another and often this data is stored in a relational database.

Yet there is a huge divide between modern programming languages and databases in how they represent and manipulate information. This impedance mismatch is visible in multiple ways. Most notable is that programming languages access information in databases through APIs that require queries to be specified as text strings.

These queries are significant portions of the program logic. Yet they are opaque to the language, unable to benefit from compile-time verification and design-time features like IntelliSense. Of course, the differences go far deeper than that.

How information is represented—the data model—is quite different between the two. Modern programming languages define information in the form of objects. Relational databases use rows.

Objects have unique identity as each instance is physically different from another. Rows are identified by primary key values. Objects have references that identify and link instances together.

Rows are left intentionally distinct requiring related rows to be tied together loosely using foreign keys. Objects stand alone, existing as long as they are still referenced by another object. Rows exist as elements of tables, vanishing as soon as they are removed.

It is no wonder that applications expected to bridge this gap are difficult to build and maintain. It would certainly simplify the equation to get rid of one side or the other. Yet relational databases provide critical infrastructure for long-term storage and query processing, and modern programming languages are indispensable for agile development and rich computation.

Until now, it has been the job of the application developer to resolve this mismatch in each application separately. The best solutions so far have been elaborate database abstraction layers that ferry the information between the application's domain-specific object models and the tabular representation of the database, reshaping and reformatting the data each way. Yet by obscuring the true data source, these solutions end up throwing away the most compelling feature of relational databases: the ability for the data to be queried.

LINQ to SQL, a component of Visual Studio Code Name "Orcas", provides a run-time infrastructure for managing relational data as objects without losing the ability to query. It does this by translating language-integrated queries into SQL for execution by the database, and then translating the tabular results back into objects you define. Your application is then free to manipulate the objects while LINQ to SQL stays in the background tracking your changes automatically.

LINQ to SQL is designed to be non-intrusive to your application. It is possible to migrate current ADO.NET solutions to LINQ to SQL in a piecemeal fashion (sharing the same connections and transactions) since LINQ to SQL is simply another component in the ADO.NET family. LINQ to SQL also has extensive support for stored procedures, allowing reuse of the existing enterprise assets.

It is easy to get started with LINQ to SQL applications. Objects linked to relational data can be defined just like normal objects, only decorated with attributes to identify how properties correspond to columns. Of course, it is not even necessary to do this by hand.

A design-time tool is provided to automate translating pre-existing relational database schemas into object definitions for you. Together, the LINQ to SQL run-time infrastructure and design-time tools significantly reduce the workload for the database application developer. The following chapters provide an overview of how LINQ to SQL can be used to perform common database-related tasks.

It is assumed that the reader is familiar with Language-Integrated Query and the standard query operators. LINQ to SQL is language-agnostic. Any language built to provide Language-Integrated Query can use it to enable access to information stored in relational databases.

The samples in this document are shown in both C# and Visual Basic; LINQ to SQL can be used with the LINQ-enabled version of the Visual Basic compiler as well. The first step in building a LINQ to SQL application is declaring the object classes you will use to represent your application data. Let's walk through an example.

We will start with a simple class Customer and associate it with the customers table in the Northwind sample database. To do this, we need only apply a custom attribute to the top of the class declaration. LINQ to SQL defines the Table attribute for this purpose.

The Table attribute has a Name property that you can use to specify the exact name of the database table. If no Name property is supplied, LINQ to SQL will assume the database table has the same name as the class. Only instances of classes declared as tables will be stored in the database.
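A minimal sketch of such a mapping (assuming the Orcas-era System.Data.Linq.Mapping namespace; the field names are illustrative):

```csharp
using System.Data.Linq.Mapping;

// Map the Customer class to the Customers table in Northwind.
// Without Name = "Customers", LINQ to SQL would assume the table
// is named "Customer", the same as the class.
[Table(Name = "Customers")]
public class Customer
{
    public string CustomerID;
    public string City;
}
```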

Instances of these types of classes are known as entities. The classes themselves are known as entity classes. In addition to associating classes to tables you will need to denote each field or property you intend to associate with a database column.

For this, LINQ to SQL defines the Column attribute. The Column attribute has a variety of properties you can use to customize the exact mapping between your fields and the database columns. One property of note is the Id property.

It tells LINQ to SQL that the database column is part of the primary key in the table. As with the Table attribute, you only need to supply information in the Column attribute if it differs from what can be deduced from your field or property declaration. In this example, you need to tell LINQ to SQL that the CustomerID field is part of the primary key in the table, yet you don't have to specify the exact name or type.
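A sketch of the column mapping described above (note that in the Orcas CTP the primary-key flag is the Id property; later builds renamed it IsPrimaryKey):

```csharp
using System.Data.Linq.Mapping;

[Table(Name = "Customers")]
public class Customer
{
    // Id = true marks this column as part of the table's primary key.
    // The column name and type are inferred from the field declaration.
    [Column(Id = true)]
    public string CustomerID;

    [Column]
    public string City;

    // Not decorated with [Column]: treated as a transient part of the
    // application logic and never persisted.
    public bool IsMarkedForFollowUp;
}
```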

Only fields and properties declared as columns will be persisted to or retrieved from the database. Others will be considered as transient parts of your application logic. Each database table is represented as a Table collection, accessible via the GetTable() method using its entity class to identify it.

It is recommended that you declare a strongly typed DataContext instead of relying on the basic DataContext class and the GetTable() method. A strongly typed DataContext declares all Table collections as members of the context. We will continue to use the strongly typed Northwind class for the remainder of the overview document.
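A sketch of a strongly typed DataContext (the connection-string handling is an assumption):

```csharp
using System.Data.Linq;

// Each table becomes a member of the context, so you can write
// db.Customers instead of db.GetTable<Customer>().
public partial class Northwind : DataContext
{
    public Table<Customer> Customers;
    public Table<Order> Orders;

    public Northwind(string connection) : base(connection) { }
}
```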

Relationships in relational databases are typically modeled as foreign key values referring to primary keys in other tables. To navigate between them, you must explicitly bring the two tables together using a relational join operation. Objects, on the other hand, refer to each other using property references or collections of references navigated using "dot" notation.

Obviously, dotting is simpler than joining, since you need not recall the explicit join condition each time you navigate. For data relationships such as these that will always be the same, it becomes quite convenient to encode them as property references in your entity class. LINQ to SQL defines an Association attribute you can apply to a member used to represent a relationship.

An association relationship is one like a foreign-key to primary-key relationship that is made by matching column values between tables. The Customer class now has a property that declares the relationship between customers and their orders. The Orders property is of type EntitySet because the relationship is one-to-many.

We use the OtherKey property in the Association attribute to describe how this association is done. It specifies the names of the properties in the related class to be compared with this one. There was also a ThisKey property we did not specify.

Normally, we would use it to list the members on this side of the relationship. However, by omitting it we allow LINQ to SQL to infer them from the members that make up the primary key. Notice how this is reversed in the definition for the Order class.

The Order class uses the EntityRef type to describe the relationship back to the customer. The use of the EntityRef class is required to support deferred loading (discussed later). The Association attribute for the Customer property specifies the ThisKey property since the non-inferable members are now on this side of the relationship.

Also take a look at the Storage property. It tells LINQ to SQL which private member is used to hold the value of the property. This allows LINQ to SQL to bypass your public property accessors when it stores and retrieves their value.

This is essential if you want LINQ to SQL to avoid any custom business logic written into your accessors. If the storage property is not specified, the public accessors will be used instead. You may use the Storage property with Column attributes as well.
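Both sides of the association, with the Storage property, might look like the following sketch (member names beyond CustomerID and OrderID are assumptions):

```csharp
using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table(Name = "Customers")]
public partial class Customer
{
    [Column(Id = true)]
    public string CustomerID;

    private EntitySet<Order> _Orders = new EntitySet<Order>();

    // One-to-many side: OtherKey names the Order member matched against
    // this class's primary key (ThisKey is inferred from the key).
    // Storage lets LINQ to SQL bypass the public accessor.
    [Association(OtherKey = "CustomerID", Storage = "_Orders")]
    public EntitySet<Order> Orders
    {
        get { return _Orders; }
        set { _Orders.Assign(value); }
    }
}

[Table(Name = "Orders")]
public partial class Order
{
    [Column(Id = true)]
    public int OrderID;

    [Column]
    public string CustomerID;

    private EntityRef<Customer> _Customer;

    // Many-to-one side: ThisKey names the foreign-key member here,
    // since it cannot be inferred. EntityRef<T> supports deferred
    // loading of the related customer.
    [Association(ThisKey = "CustomerID", Storage = "_Customer")]
    public Customer Customer
    {
        get { return _Customer.Entity; }
        set { _Customer.Entity = value; }
    }
}
```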

Once you introduce relationships in your entity classes, the amount of code you need to write grows as you introduce support for notifications and graph consistency. Fortunately, there is a tool (described later) that can be used to generate all the necessary definitions as partial classes, allowing you to use a mix of generated code and custom business logic. For the rest of this document, we assume the tool has been used to generate a complete Northwind data context and all entity classes.

Now that you have relationships, you can use them when you write queries simply by referring to the relationship properties defined in your class. The above query uses the Orders property to form the cross product between customers and orders, producing a new sequence of Customer and Order pairs. It's also possible to do the reverse.

In this example, the orders are queried and the Customer relationship is used to access information on the associated Customer object. Few applications are built with only query in mind. Data must be created and modified, too.
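Both directions might be written like this (db is an instance of the strongly typed context; ContactName is an assumed member):

```csharp
// Navigate from customers to their orders, forming customer/order pairs.
var pairs = from c in db.Customers
            from o in c.Orders
            where c.City == "London"
            select new { c.ContactName, o.OrderID };

// Or the reverse: from orders back through the Customer relationship.
var londonOrders = from o in db.Orders
                   where o.Customer.City == "London"
                   select o;
```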

LINQ to SQL is designed to offer maximum flexibility in manipulating and persisting changes made to your objects. As soon as entity objects are available—either by retrieving them through a query or constructing them anew—you may manipulate them as normal objects in your application, changing their values or adding and removing them from collections as you see fit. LINQ to SQL tracks all your changes and is ready to transmit them back to the database as soon as you are done.

The example below uses the Customer and Order classes generated by a tool from the metadata of the entire Northwind sample database. The class definitions have not been shown for brevity. When SubmitChanges() is called, LINQ to SQL automatically generates and executes SQL commands in order to transmit the changes back to the database.
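A sketch of the update pattern (the customer key "ALFKI" and the ContactName and OrderDate members are assumptions drawn from the Northwind sample):

```csharp
Northwind db = new Northwind(connectionString);

// Retrieve an existing entity and modify it; LINQ to SQL records the change.
Customer cust = db.Customers.Single(c => c.CustomerID == "ALFKI");
cust.ContactName = "New Contact";

// Create a new entity and attach it through a relationship.
Order ord = new Order { OrderDate = DateTime.Now };
cust.Orders.Add(ord);

// One call translates all tracked changes into INSERT/UPDATE commands.
db.SubmitChanges();
```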

It is also possible to override this behavior with custom logic. The custom logic may call a database stored procedure. LINQ to SQL provides an implementation of the standard query operators for objects associated with tables in a relational database.

This chapter describes the LINQ to SQL-specific aspects of queries. Whether you write a query as a high-level query expression or build one out of the individual operators, the query that you write is not an imperative statement executed immediately. It is a description.

For example, in the declaration below the local variable q refers to the description of the query, not the result of executing it. The actual type of q in this instance is IQueryable. It's not until the application attempts to enumerate the contents of the query that it actually executes.

In this example the foreach statement causes the execution to occur. An IQueryable object is similar to an ADO.NET command object. Having one in hand does not imply that a query was executed.
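A sketch of the deferred-execution pattern being described (CompanyName is an assumed member):

```csharp
// q is only a description of the query; nothing has executed yet.
var q = from c in db.Customers
        where c.City == "London"
        select c;

// Enumeration triggers translation to SQL and execution on the server.
foreach (Customer c in q)
    Console.WriteLine(c.CompanyName);

// Enumerating q a second time would execute the query again.
```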

A command object holds onto a string that describes a query. Likewise, an IQueryable object holds onto a description of a query encoded as a data structure known as an Expression. A command object has an ExecuteReader() method that causes execution, returning results as a DataReader.

An IQueryable object has a GetEnumerator() method that causes the execution, returning results as an IEnumerator. Therefore, it follows that if a query is enumerated twice it will be executed twice. This behavior is known as deferred execution.

Just like with an ADO.NET command object it is possible to hold onto a query and re-execute it. Of course, application writers often need to be very explicit about where and when a query is executed. It would be unexpected if an application were to execute a query multiple times simply because it needed to examine the results more than once.

For example, you may want to bind the results of a query to something like a DataGrid. The control may enumerate the results each time it paints on the screen. One benefit of deferred execution is that queries may be piecewise constructed with execution only occurring when the construction is complete.

You can start out composing a portion of a query, assigning it to a local variable and then sometime later continue applying more operators to it. In this example, q starts out as a query for all customers in London. Later on it changes into an ordered query depending on application state.
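Piecewise construction might look like this (orderByName stands in for whatever application state drives the decision):

```csharp
// Start with a base query for all customers in London ...
IQueryable<Customer> q = from c in db.Customers
                         where c.City == "London"
                         select c;

// ... and later refine it depending on application state.
if (orderByName)
    q = q.OrderBy(c => c.ContactName);

// Only at enumeration is the combined query translated and executed.
var results = q.ToList();
```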

By deferring execution the query can be constructed to suit the exact needs of the application without requiring risky string manipulation. Objects in the runtime have unique identity. If two variables refer to the same object, they are actually referring to the same object instance.

Because of this, changes made via a path through one variable are immediately visible through the other. Rows in a relational database table do not have unique identity. However, they do have a primary key and that primary key may be unique, meaning no two rows may share the same key.

Yet this only constrains the contents of the database table. Therefore, as long as we only interact with the data through remote commands, it amounts to about the same thing. However, this is rarely the case.

Most often data is brought out of the database and into a different tier where an application manipulates it. Clearly, this is the model that LINQ to SQL is designed to support. When the data is brought out of the database as rows, there is no expectation that two rows representing the same data actually correspond to the same row instances.

If you query for a specific customer twice, you get two rows of data, each containing the same information. Yet with objects, you expect something quite different. You expect that if you ask the DataContext for the same information again, it will in fact give you back the same object instance.

You expect this because objects have special meaning for your application and you expect them to behave like normal objects. You designed them as hierarchies or graphs and you certainly expect to retrieve them as such, without hordes of replicated instances merely because you asked for the same thing twice. Because of this, the DataContext manages object identity.

Whenever a new row is retrieved from the database, it is logged in an identity table by its primary key and a new object is created. Whenever that same row is retrieved again, the original object instance is handed back to the application. In this way, the DataContext translates the database's concept of identity (keys) into the language's concept (instances).

The application only ever sees the object in the state that it was first retrieved. The new data, if different, is thrown away. You might be puzzled by this: why would any application throw data away?

As it turns out this is how LINQ to SQL manages integrity of the local objects and is able to support optimistic updates. Since the only changes that occur after the object is initially created are those made by the application, the intent of the application is clear. If changes by an outside party have occurred in the interim they will be identified at the time SubmitChanges() is called.

More of this is explained in the Simultaneous Changes section. Note that, in the case that the database contains a table without a primary key, LINQ to SQL allows queries to be submitted over the table, but it doesn't allow updates. This is because the framework cannot identify which row to update given the lack of a unique key.

Of course, if the object requested by the query is easily identifiable by its primary key as one already retrieved no query is executed at all. The identity table acts as a cache storing all previously retrieved objects. As we saw in the quick tour, references to other objects or collections of other objects in your class definitions directly correspond to foreign-key relationships in the database.

You can use these relationships when you query by simply using dot notation to access the relationship properties, navigating from one object to another. These access operations translate to more complicated joins or correlated sub-queries in the equivalent SQL, allowing you to walk through your object graph during a query. For example, the following query navigates from orders to customers as a way to restrict the results to only those orders for customers located in London.
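The navigation being described might be written as follows; the dot through the Customer relationship becomes a join (or correlated subquery) in the generated SQL:

```csharp
// Restrict orders to those placed by customers located in London.
var londonOrders = from o in db.Orders
                   where o.Customer.City == "London"
                   select o;
```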

If relationship properties did not exist you would have to write them out manually as joins just as you would do in a SQL query. The relationship property allows you to define this particular relationship once enabling the use of the more convenient dot syntax. However, this is not the reason why relationship properties exist.

They exist because we tend to define our domain-specific object models as hierarchies or graphs. The objects we choose to program against have references to other objects. It's only a happy coincidence that since object-to-object relationships correspond to foreign key style relationships in databases that property access leads to a convenient way to write joins.

Therefore, the existence of relationship properties is more important on the results side of a query than as part of the query itself. Once you have your hands on a particular customer, its class definition tells you that customers have orders. So when you look into the Orders property of a particular customer you expect to see the collection populated with all the customer's orders, since that is in fact the contract you declared by defining the classes this way.

You expect to see the orders there even if you did not particularly ask for orders up front. You expect your object model to maintain an illusion that it is an in-memory extension of the database, with related objects immediately available. LINQ to SQL implements a technique called deferred loading in order to help maintain this illusion.

When you query for an object you actually only retrieve the objects you asked for. The related objects are not automatically fetched at the same time. However, the fact that the related objects are not already loaded is not observable since as soon as you attempt to access them a request goes out to retrieve them.

For example, you may want to query for a particular set of orders and then only occasionally send an email notification to particular customers. You would not necessarily need to retrieve all customer data up front with every order. Deferred loading allows you to defer the cost of retrieving extra information until you absolutely have to.

Of course, the opposite might also be true. You might have an application that needs to look at customer and order data at the same time. You know you need both sets of data.

You know your application is going to drill down through each customer's orders as soon as you get them. It would be unfortunate to fire off individual queries for orders for every customer. What you really want to happen is to have the order data retrieved together with the customers.

Certainly, you can always find a way to join customers and orders together in a query by forming the cross product and retrieving all the relative bits of data as one big projection. But then the results would not be entities. Entities are objects with identity that you can modify while the results would be projections that cannot be changed and persisted.

Worse, you would be retrieving a huge amount of redundant data as each customer repeats for each order in the flattened join output. What you really need is a way to retrieve a set of related objects at the same time—a delineated portion of a graph so you would never be retrieving any more or any less than was necessary for your intended use. LINQ to SQL allows you to request immediate loading of a region of your object model for just this reason.

It does this by allowing the specification of a DataShape for a DataContext. The DataShape class is used to instruct the framework about which objects to retrieve when a particular type is retrieved. In the previous query, all the Orders for all the Customers who live in London are retrieved when the query is executed, so that successive access to the Orders property on a Customer object doesn't trigger a database query.

The DataShape class can also be used to specify sub-queries that are applied to a relationship navigation. In the previous code, the inner foreach statement iterates just over the Orders that have been shipped today, because just such orders have been retrieved from the database. After assigning a DataShape to a DataContext, the DataShape cannot be modified.
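A sketch of both uses, assuming the Orcas-era API (DataShape and the Shape property were later reworked into DataLoadOptions and LoadOptions; ShippedDate is an assumed member):

```csharp
DataShape ds = new DataShape();

// Immediate loading: fetch each customer's Orders along with the customer.
ds.LoadWith<Customer>(c => c.Orders);

// Sub-query on the relationship: only orders shipped today will
// populate the Orders property.
ds.AssociateWith<Customer>(
    c => c.Orders.Where(o => o.ShippedDate == DateTime.Today));

// Once assigned, the DataShape is frozen; further LoadWith or
// AssociateWith calls on it fail at run time.
db.Shape = ds;
```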

Any LoadWith or AssociateWith method call on such a DataShape will result in an error at run time. It is impossible to create cycles by using LoadWith or AssociateWith.

Most queries against object models heavily rely on navigating object references in the object model. However, there are interesting "relationships" between entities that may not be captured in the object model as references. For example, Customer.Orders is a useful relationship based on foreign key relationships in the Northwind database. However, Suppliers and Customers in the same City or Country is an ad hoc relationship that is not based on a foreign key relationship and may not be captured in the object model. Joins provide an additional mechanism to handle such relationships.

LINQ to SQL supports the new join operators introduced in LINQ. Consider the following problem—find suppliers and customers based in the same city. The following query returns supplier and customer company names and the common city as a flattened result.
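The flat same-city query might look like this (CompanyName is an assumed member on both entities):

```csharp
// One entry per supplier/customer pair based in the same city.
var pairs = from s in db.Suppliers
            join c in db.Customers on s.City equals c.City
            select new { Supplier = s.CompanyName,
                         Customer = c.CompanyName,
                         s.City };
```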

The above query eliminates suppliers that are not in the same city as a certain customer. However, there are times when we don't want to eliminate one of the entities in an ad hoc relationship. The following query lists all suppliers with groups of customers for each of the suppliers.

If a particular supplier does not have any customer in the same city, the result is an empty collection of customers corresponding to that supplier. Note that the results are not flat—each supplier has an associated collection. Effectively, this provides group join—it joins two sequences and groups elements of the second sequence by the elements of the first sequence.

Group join can be extended to multiple collections as well. The following query extends the above query by listing employees that are in the same city as the supplier. Here, the result shows a supplier with (possibly empty) collections of customers and employees.

The results of a group join can also be flattened. The results of flattening the group join between suppliers and customers are multiple entries for suppliers with multiple customers in their city—one per customer. Empty collections are replaced with nulls.

This is equivalent to a left outer equi-join in relational databases. The signatures for underlying join operators are defined in the standard query operators document. Only equi-joins are supported and the two operands of equals must have the same type.
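The grouped and flattened forms might be sketched as follows; DefaultIfEmpty() is what turns the group join into a left outer equi-join:

```csharp
// Group join: every supplier appears once, with a (possibly empty)
// collection of customers in the same city.
var grouped = from s in db.Suppliers
              join c in db.Customers on s.City equals c.City into custs
              select new { s.CompanyName, Customers = custs };

// Flattened: one row per supplier/customer pair; suppliers with no
// matching customer appear once, paired with null.
var flat = from s in db.Suppliers
           join c in db.Customers on s.City equals c.City into custs
           from c in custs.DefaultIfEmpty()
           select new { Supplier = s.CompanyName,
                        Customer = c == null ? null : c.CompanyName };
```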

So far, we have only looked at queries for retrieving entities—objects directly associated with database tables. We need not constrain ourselves to just this. The beauty of a query language is that you can retrieve information in any form you want.

You will not be able to take advantage of automatic change tracking or identity management when you do so. However, you can get just the data you want. For example, you may simply need to know the company names of all customers in London.

If this is the case there is no particular reason to retrieve entire customer objects merely to pick out names. You can project out the names as part of the query. In this case, q becomes a query that retrieves a sequence of strings.

If you want to get back more than just a single name, but not enough to justify fetching the entire customer object, you can specify any subset you want by constructing the results as part of your query. This example uses an anonymous object initializer to create a structure that holds both the company name and phone number. You may not know what to call the type, but with implicitly typed local variable declaration in the language you do not necessarily need to.

If you are consuming the data immediately, anonymous types make a good alternative to explicitly defining classes to hold your query results. You can also form cross products of entire objects, though you might rarely have a reason to do so. This query constructs a sequence of pairs of customer and order objects.
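Both projections might be written like this (CompanyName and Phone are assumed members):

```csharp
// Project just the strings you need: q is a query over string values.
var names = from c in db.Customers
            where c.City == "London"
            select c.CompanyName;

// Or an anonymous type holding a subset of the columns; an implicitly
// typed local variable (var) holds the result.
var contacts = from c in db.Customers
               where c.City == "London"
               select new { c.CompanyName, c.Phone };
```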

It's also possible to make projections at any stage of the query. You can project data into newly constructed objects and then refer to those objects' members in subsequent query operations. Be wary of using parameterized constructors at this stage, though.

It is technically valid to do so, yet it is impossible for LINQ to SQL to track how constructor usage affects member state without understanding the actual code inside the constructor. Because LINQ to SQL attempts to translate the query into pure relational SQL, locally defined object types are not available on the server to actually construct. All object construction is actually postponed until after the data is retrieved back from the database.

In place of actual constructors, the generated SQL uses normal SQL column projection. Since it is not possible for the query translator to understand what is happening during a constructor call, it is unable to establish a meaning for the Name field of MyType. Instead, the best practice is to always use object initializers to encode projections.

The only safe place to use a parameterized constructor is in the final projection of a query. You can even use elaborate nesting of object constructors if you desire, like this example that constructs XML directly out of the result of a query. It works as long as it's the last projection of the query.

Still, even if constructor calls are understood, calls to local methods may not be. If your final projection requires invocation of local methods, it is unlikely that LINQ to SQL will be able to oblige. Method calls that do not have a known translation into SQL cannot be used as part of the query.

One exception to this rule is method calls that have no arguments dependent on query variables. These are not considered part of the translated query and instead are treated as parameters. Still elaborate projections (transformations) may require local procedural logic to implement.

For you to use your own local methods in a final projection you will need to project twice. The first projection extracts all the data values you'll need to reference and the second projection performs the transformation. In between these two projections is a call to the AsEnumerable() operator that shifts processing at that point from a LINQ to SQL query into a locally executed one.

Note   The AsEnumerable() operator, unlike ToList() and ToArray(), does not cause execution of the query. It is still deferred. The AsEnumerable() operator merely changes the static typing of the query, turning an IQueryable<T> (IQueryable(Of T) in Visual Basic) into an IEnumerable<T> (IEnumerable(Of T) in Visual Basic), tricking the compiler into treating the rest of the query as locally executed.
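The double-projection pattern might be sketched as follows; MyFormatPhone is a hypothetical local method standing in for logic SQL cannot execute:

```csharp
// First projection: only data LINQ to SQL can translate into SQL.
var rows = from c in db.Customers
           where c.City == "London"
           select new { c.CompanyName, c.Phone };

// AsEnumerable() shifts the remainder of the query to local execution,
// so the second projection may call any local method.
var formatted = from row in rows.AsEnumerable()
                select MyFormatPhone(row.CompanyName, row.Phone);
```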

It is common in many applications to execute structurally similar queries many times. In such cases, it is possible to increase performance by compiling the query once and executing it several times in the application with different parameters. This result is obtained in LINQ to SQL by using the CompiledQuery class.

The Compile method returns a delegate that can be cached and executed afterward several times by just changing the input parameters. LINQ to SQL does not actually execute queries; the relational database does. LINQ to SQL translates the queries you wrote into equivalent SQL queries and sends them to the server for processing.
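Compiled queries might be used like this (the Northwind context type is the strongly typed one described earlier):

```csharp
// Compile once: the delegate caches the translated SQL.
var byCity = CompiledQuery.Compile(
    (Northwind db, string city) =>
        from c in db.Customers
        where c.City == city
        select c);

// Execute many times, varying only the parameters.
var london = byCity(db, "London");
var paris  = byCity(db, "Paris");
```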

Because execution is deferred, LINQ to SQL is able to examine your entire query even if assembled from multiple parts. Since the relational database server is not actually executing IL (aside from the CLR integration in SQL Server 2005), the queries are not transmitted to the server as IL. They are in fact transmitted as parameterized SQL queries in text form.

Of course, SQL (even T-SQL with CLR integration) is incapable of executing the variety of methods that are locally available to your program. Therefore the queries you write must be translated into equivalent operations and functions that are available inside the SQL environment.

Most methods and operators on .NET Framework built-in types have direct translations into SQL. Some can be produced out of the functions that are available. The ones that cannot be translated are disallowed, generating run-time exceptions if you try to use them.

There is a section later in the document that details the framework methods that are implemented to translate into SQL. LINQ to SQL is more than just an implementation of the standard query operators for relational databases. In addition to translating queries, it is a service that manages your objects throughout their lifetime, aiding you in maintaining the integrity of your data and automating the process of translating your modifications back into the store.

In a typical scenario, objects are retrieved through one or more queries and then manipulated in some way or another until the application is ready to send the changes back to the server. This process may repeat a number of times until the application no longer has use for this information. At that point, the objects are reclaimed by the runtime just like normal objects.

The data, however, remains in the database. Even after being erased from their run-time existence, objects representing the same data can still be retrieved. In this sense, the object's true lifetime exists beyond any single run-time manifestation.

The focus of this chapter is the entity lifecycle where a cycle refers to the time span of a single manifestation of an entity object within a particular run-time context. The cycle starts when the DataContext becomes aware of a new instance and ends when the object or DataContext is no longer needed. LINQ to SQL starts tracking your entities the moment they are retrieved from the database, before you ever lay your hands on them.

Indeed, the identity management service discussed earlier has already kicked in as well. Change tracking costs very little in additional overhead until you actually start making changes. As soon as the CompanyName is assigned in the example above, LINQ to SQL becomes aware of the change and is able to record it.

The original values of all data members are retained by the change tracking service. The change tracking service also records all manipulations of relationship properties. You use relationship properties to establish the links between your entities, even though they may be linked by key values in the database.

There is no need to directly modify the members associated with the key columns. LINQ to SQL automatically synchronizes them for you before the changes are submitted. You can move orders from one customer to another by simply making an assignment to their Customer property.

Since the relationship exists between the customer and the order, you can change the relationship by modifying either side. You could have just as easily removed them from the Orders collection of cust2 and added them to the orders collection of cust1, as shown below. Of course, if you assign a relationship the value of null, you are in fact getting rid of the relationship completely.
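A sketch of both styles, assuming Northwind-style `Customer` and `Order` entities with the bidirectional relationship properties described here (the key values are illustrative):

```csharp
Customer cust1 = db.Customers.Single(c => c.CustomerID == "ALFKI");
Customer cust2 = db.Customers.Single(c => c.CustomerID == "ANATR");
Order order = cust2.Orders.First();

// Reference side: reassigning Customer also moves the order between
// the two Orders collections automatically.
order.Customer = cust1;

// Collection side: the equivalent manipulation, which also updates
// order.Customer for you.
// cust1.Orders.Remove(order);
// cust2.Orders.Add(order);

// Severing the relationship entirely:
order.Customer = null;   // removes the order from its customer's Orders list
```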

Assigning a Customer property of an order to null actually removes the order from the customer's list. Automatic updating of both sides of a relationship is essential for maintaining consistency of your object graph. Unlike normal objects, relationships between data are often bidirectional.

LINQ to SQL allows you to use properties to represent relationships. However, it does not offer a service to automatically keep these bidirectional properties in sync. This is a level of service that must be baked directly into your class definitions.

Entity classes generated using the code generation tool have this capability. In the next chapter, we will show you how to do this to your own handwritten classes. It is important to note, however, that removing a relationship does not imply that an object has been deleted from the database.

Remember, the lifetime of the underlying data persists in the database until the row has been deleted from the table. The only way to actually delete an object is to remove it from its Table collection. Like with all other changes, the order has not actually been deleted.

It just looks that way to us since it has been removed and detached from the rest of our objects. When the order object was removed from the Orders table, it was marked for deletion by the change tracking service. The actual deletion from the database will occur when the changes are submitted on a call to SubmitChanges().
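A minimal sketch of that deletion sequence (the entity names are illustrative; the pre-release Remove() method shown here was renamed DeleteOnSubmit() in the released API):

```csharp
Order order = cust.Orders.First();

// Removing from the Table marks the row for deletion; nothing is sent yet.
db.Orders.Remove(order);

// The DELETE statement is transmitted to the database here.
db.SubmitChanges();
```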

Note that the object itself is never deleted. The runtime manages the lifetime of object instances, so it sticks around as long as you are still holding a reference to it. However, after an object has been removed from its Table and changes submitted it is no longer tracked by the change tracking service.

The only other time an entity is left untracked is when it exists before the DataContext is aware of it. This happens whenever you create new objects in your code. You are free to use instances of entity classes in your application without ever retrieving them from a database.

Change tracking and identity management only apply to those objects that the DataContext is aware of. Therefore, neither service is enabled for newly created instances until you add them to the DataContext. This can occur in one of two ways.

You can call the Add() method on the related Table collection manually. Alternatively, you can attach a new instance to an object that the DataContext is already aware of. The DataContext will discover your new object instances even if they are attached to other new instances.
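Both ways can be sketched as follows (entity shapes are illustrative; the pre-release Add() shown here became InsertOnSubmit() in the released API):

```csharp
// Way 1: add the new instance to its Table collection manually.
var cust = new Customer { CustomerID = "NEWCO", CompanyName = "New Co." };
db.Customers.Add(cust);

// Way 2: attach a new instance to an object the DataContext already tracks.
var order = new Order { OrderDate = DateTime.Now };
cust.Orders.Add(order);   // discovered through the object graph, no Add() call needed

db.SubmitChanges();       // both rows are inserted
```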

Basically, the DataContext will recognize any entity in your object graph that is not currently tracked as a new instance, whether or not you called the Add() method. Many scenarios don't necessitate updating the entities retrieved from the database. Showing a table of Customers on a Web page is one obvious example.

In all such cases, it is possible to improve performance by instructing the DataContext not to track the changes to the entities. Regardless of how many changes you make to your objects, those changes were only made to in-memory replicas. Nothing has yet happened to the actual data in the database.

Transmission of this information to the server will not happen until you explicitly request it by calling SubmitChanges() on the DataContext. When you do call SubmitChanges(), the DataContext will attempt to translate all your changes into equivalent SQL commands, inserting, updating, or deleting rows in corresponding tables. These actions can be overridden by your own custom logic if you desire; however, the order of submission is orchestrated by a service of the DataContext known as the change processor.

The first thing that happens when you call SubmitChanges() is that the set of known objects is examined to determine whether new instances have been attached to them. These new instances are added to the set of tracked objects. Next, all objects with pending changes are ordered into a sequence of objects based on dependencies between them.

Those objects whose changes depend on other objects are sequenced after their dependencies. Foreign key constraints and uniqueness constraints in the database play a big part in determining the correct ordering of changes. Then, just before any actual changes are transmitted, a transaction is started to encapsulate the series of individual commands unless one is already in scope.

Finally, one by one the changes to the objects are translated into SQL commands and sent to the server. At this point, any errors detected by the database will cause the submission process to abort and an exception will be raised. All changes to the database will be rolled back as if none of the submissions ever took place.

The DataContext will still have a full recording of all changes so it is possible to attempt to rectify the problem and resubmit them by calling SubmitChanges() again. When the transaction around the submission completes successfully, the DataContext will accept the changes to the objects by simply forgetting the change tracking information. There are a variety of reasons why a call to SubmitChanges() may fail.

You may have created an object with an invalid primary key (one that's already in use), or with a value that violates some check constraint of the database. These kinds of checks are difficult to bake into business logic since they often require absolute knowledge of the entire database state. However, the most likely reason for failure is simply that someone else made changes to the objects before you.

Certainly, this would be impossible if you were locking each object in the database and using a fully serialized transaction. However, this style of programming (pessimistic concurrency) is rarely used since it is expensive and true clashes seldom occur. The most popular form of managing simultaneous changes is to employ a form of optimistic concurrency.

In this model, no locks against the database rows are taken at all. That means any number of changes to the database could have occurred between the time you first retrieved your objects and the time you submitted your changes. Therefore, unless you want to go with a policy that the last update wins, wiping over whatever else occurred before you, you probably want to be alerted to the fact that the underlying data was changed by someone else.

The DataContext has built-in support for optimistic concurrency by automatically detecting change conflicts. Individual updates only succeed if the database's current state matches the state you understood the data to be in when you first retrieved your objects. This happens on a per object basis, only alerting you to violations if they happen to objects you have made changes to.

You can control the degree to which the DataContext detects change conflicts when you define your entity classes. Each Column attribute has a property called UpdateCheck that can be assigned one of three values: Always, Never, and WhenChanged. If not set, the default for a Column attribute is Always, meaning the data values represented by that member are always checked for conflicts, unless there is an obvious tie-breaker like a version stamp.

A Column attribute has an IsVersion property that allows you to specify whether the data value constitutes a version stamp maintained by the database. If a version exists, then the version is used alone to determine if a conflict has occurred. When a change conflict does occur, an exception will be thrown just as if it were any other error.
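A sketch of the three UpdateCheck settings alongside a version-stamp column (the Product shape is illustrative; Binary is System.Data.Linq.Binary):

```csharp
[Table(Name = "Products")]
public class Product
{
    [Column(IsPrimaryKey = true)]
    public int ProductID;

    // Default behavior: always compared against the database on update.
    [Column(UpdateCheck = UpdateCheck.Always)]
    public string ProductName;

    // Compared only if this member itself was changed.
    [Column(UpdateCheck = UpdateCheck.WhenChanged)]
    public short UnitsInStock;

    // Never part of the conflict check.
    [Column(UpdateCheck = UpdateCheck.Never)]
    public string QuantityPerUnit;

    // Database-maintained version stamp; when present, it alone decides conflicts.
    [Column(IsVersion = true)]
    public Binary TimeStamp;
}
```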

The transaction surrounding the submission will abort, yet the DataContext will remain the same, allowing you the opportunity to rectify the problem and try again. If you are making changes on a middle-tier or server, the easiest thing you can do to rectify a change conflict is to simply start over and try again, recreating the context and reapplying the changes. Additional options are described in the following section.

A transaction is a service provided by databases or any other resource manager that can be used to guarantee that a series of individual actions occur atomically, meaning either they all succeed or they all don't. If they don't, then they are also all automatically undone before anything else is allowed to happen. If no transaction is already in scope, the DataContext will automatically start a database transaction to guard updates when you call SubmitChanges().

You may choose to control the type of transaction used, its isolation level, or what it actually encompasses by initiating it yourself. The transaction isolation level that the DataContext will use by default is known as ReadCommitted. The example above initiates a fully serialized transaction by creating a new transaction scope object.

All database commands executed within the scope of the transaction will be guarded by the transaction. This modified version of the same example uses the ExecuteCommand() method on the DataContext to execute a stored procedure in the database right before the changes are submitted. Regardless of what the stored procedure does to the database, we can be certain its actions are part of the same transaction.
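The transaction-scope example the text refers to is not included here; a reconstruction might look like the following (requires System.Transactions; "PreSubmitAudit" is a hypothetical stored procedure name):

```csharp
using (var scope = new TransactionScope())
{
    // Any command executed here joins the same ambient transaction,
    // including this stored procedure call.
    db.ExecuteCommand("exec PreSubmitAudit");

    db.SubmitChanges();

    scope.Complete();   // commit; disposing without Complete() rolls everything back
}
```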

If the transaction completes successfully, the DataContext throws out all the accumulated tracking information and treats the new states of the entities as unchanged. It does not, however, roll back the changes to your objects if the transaction fails. This allows you the maximum flexibility in dealing with problems during change submission.

It is also possible to use a local SQL transaction instead of the new TransactionScope. LINQ to SQL offers this capability to help you integrate LINQ to SQL features into pre-existing ADO.NET applications. However, if you go this route you will need to be responsible for much more.

As you can see, using a manually controlled database transaction is a bit more involved. Not only do you have to start it yourself, you have to tell the DataContext explicitly to use it by assigning it to the Transaction property. Then you must use a try-catch block to encase your submit logic, remembering to explicitly tell the transaction to commit and to explicitly tell the DataContext to accept changes, or to abort the transactions if there is failure at any point.
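The steps above can be sketched as follows (a reconstruction, not the document's own listing; in the released API, SubmitChanges() accepts the changes itself on success, so the explicit accept step described above is omitted):

```csharp
db.Connection.Open();
DbTransaction tx = db.Connection.BeginTransaction();
db.Transaction = tx;          // tell the DataContext explicitly to use it
try
{
    db.SubmitChanges();
    tx.Commit();              // explicitly commit ...
}
catch
{
    tx.Rollback();            // ... or abort the transaction on any failure
    throw;
}
finally
{
    db.Transaction = null;    // and clear the property when you are done
}
```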

Also, don't forget to set the Transaction property back to null when you are done. When SubmitChanges() is called, LINQ to SQL generates and executes SQL commands to insert, update, and delete rows in the database. These actions can be overridden by application developers and in their place custom code can be used to perform the desired actions.

In this way, alternative facilities like database-stored procedures can be invoked automatically by the change processor. Consider a stored procedure for updating the units in stock for the Products table in the Northwind sample database. The SQL declaration of the procedure is as follows.

You can use the stored procedure instead of the normal auto-generated update command by defining a method on your strongly typed DataContext. Even if the DataContext class is being auto-generated by the LINQ to SQL code generation tool, you can still specify these methods in a partial class of your own. The signature of the method and the generic parameter tell the DataContext to use this method in place of a generated update statement.

The original and current parameters are used by LINQ to SQL for passing in the original and current copies of the object of the specified type. The two parameters are available for optimistic concurrency conflict detection. Note: If you override the default update logic, conflict detection is your responsibility.

The stored procedure UpdateProductStock is invoked using the ExecuteCommand() method of the DataContext. The object array is used for passing parameters required for executing the command. Similar to the update method, insert and delete methods may be specified.
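The SQL declaration of UpdateProductStock and the override method are not reproduced in this excerpt; a reconstruction of the override might look like this (the procedure's parameter names and the `NorthwindDataContext` name are assumptions):

```csharp
public partial class NorthwindDataContext
{
    // This signature tells the change processor to call this method
    // instead of the auto-generated UPDATE for Product rows.
    public void UpdateProduct(Product original, Product current)
    {
        // Invoke the stored procedure; the object array supplies its parameters.
        this.ExecuteCommand(
            "exec UpdateProductStock @id = {0}, @originalUnits = {1}, @decrement = {2}",
            original.ProductID,
            original.UnitsInStock,
            original.UnitsInStock - current.UnitsInStock);

        // Conflict detection is our responsibility once the default logic is overridden.
    }
}
```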

Insert and delete methods take only one parameter of the entity type to be updated. An entity class is just like any normal object class that you might define as part of your application, except that it is annotated with special information that associates it with a particular database table. These annotations are made as custom attributes on your class declaration.

The attributes are only meaningful when you use the class in conjunction with LINQ to SQL. They are similar to the XML serialization attributes in the .NET Framework. These "data" attributes provide LINQ to SQL with enough information to translate queries for your objects into SQL queries against the database and changes to your objects into SQL insert, update, and delete commands.

It is also possible to represent the mapping information by using an XML mapping file instead of attributes. This scenario is described in more detail in the External Mapping section. The Database attribute is used to specify the default name of the database if it is not supplied by the connection.

Database attributes can be applied to strongly typed DataContext declarations. This attribute is optional. The Table attribute is used to designate a class as an entity class associated with a database table.

Classes with the Table attribute will be treated specially by LINQ to SQL. The Column attribute is used to designate a member of an entity class that represents a column in a database table. It can be applied to any field or property, public, private or internal.

Only members identified as columns are persisted when LINQ to SQL saves changes to the database. A typical entity class will use Column attributes on public properties and store actual values in private fields. The DBType is only specified so that the CreateDatabase() method can construct the table with the most precise type.

Otherwise, the knowledge that the underlying column is limited to 15 characters is unused. Members representing the primary key of a database table will often be associated with auto-generated values. If you do specify the DBType, make sure to include the IDENTITY modifier.
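A sketch combining these rules (the entity shape is illustrative; IsDbGenerated is the released name of the auto-generation property):

```csharp
[Table(Name = "Customers")]
public class Customer
{
    private string _City;

    // Auto-generated key: a custom DBType must include the IDENTITY
    // modifier itself, since LINQ to SQL will not augment it.
    [Column(DBType = "int NOT NULL IDENTITY", IsPrimaryKey = true, IsDbGenerated = true)]
    public int CustomerID;

    // The 15-character limit matters only to CreateDatabase().
    [Column(Storage = "_City", DBType = "nvarchar(15)")]
    public string City
    {
        get { return _City; }
        set { _City = value; }
    }
}
```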

LINQ to SQL will not augment a custom specified DBType. However, if the DBType is left unspecified LINQ to SQL will infer that the IDENTITY modifier is needed when creating the Database via the CreateDatabase() method. Likewise, if the IsVersion property is true, the DBType must specify the correct modifiers to designate a version number or timestamp column.

If no DBType is specified, LINQ to SQL will infer the correct modifiers. You can control access to a member associated with an auto-generated column, version stamp, or any column you might want to hide by designating the access level of the member, or even limiting the accessor itself. The Order's CustomerID property can be made read-only by not defining a set accessor.

LINQ to SQL can still get and set the underlying value through the storage member. You can also make a member completely inaccessible to the rest of the application by placing a Column attribute on a private member. This allows the entity class to contain information relevant to the class's business logic without exposing it in general.

Even though private members are part of the translated data, since they are private you cannot refer to them in a language-integrated query. By default, all members are used to perform optimistic concurrency conflict detection. You can control whether a particular member is used by specifying its UpdateCheck value.

The following table shows the permissible mappings between database types and the corresponding CLR types. Use this table as a guide when determining which CLR type to use to represent a particular database column. The Association attribute is used to designate a property that represents a database association like a foreign-key to primary-key relationship.

Association properties either represent a single reference to another entity class instance or they represent a collection of references. Singleton references must be encoded in the entity class using the EntityRef (EntityRef(OfT) in Visual Basic) value type to store the actual reference. The EntityRef type is how LINQ to SQL enables deferred loading of references.

The public property is typed as Customer, not EntityRef. It is important not to expose the EntityRef type as part of the public API, as references to this type in a query will not be translated to SQL. Likewise, an association property representing a collection must use the EntitySet (EntitySet(OfT) in Visual Basic) collection type to store the relationship.

However, since an EntitySet (EntitySet(OfT) in Visual Basic) is a collection, it is valid to use the EntitySet as the return type. It is also valid to disguise the true type of the collection, using the ICollection (ICollection(OfT) in Visual Basic) interface instead. Make certain to use the Assign() method on the EntitySet if you expose a public setter for the property.
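Both sides of the association can be sketched as follows (a minimal reconstruction with illustrative names; deferred-loading and notification plumbing omitted):

```csharp
[Table(Name = "Orders")]
public class Order
{
    [Column(IsPrimaryKey = true)] public int OrderID;
    [Column] public string CustomerID;

    private EntityRef<Customer> _Customer;

    // Singleton side: the public type is Customer, never EntityRef<Customer>.
    [Association(Storage = "_Customer", ThisKey = "CustomerID")]
    public Customer Customer
    {
        get { return _Customer.Entity; }
        set { _Customer.Entity = value; }
    }
}

[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)] public string CustomerID;

    private EntitySet<Order> _Orders = new EntitySet<Order>();

    // Collection side: EntitySet<Order> is a valid public return type.
    [Association(Storage = "_Orders", OtherKey = "CustomerID")]
    public EntitySet<Order> Orders
    {
        get { return _Orders; }
        set { _Orders.Assign(value); }   // keep the same tracked collection instance
    }
}
```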

This allows the entity class to keep using the same collection instance since it may already be tied into the change tracking service.

The ResultType attribute specifies an element type of an enumerable sequence that can be returned from a function that has been declared to return the IMultipleResults interface. This attribute can be specified more than once.

The StoredProcedure attribute is used to declare that a call to a method defined on the DataContext or Schema type is translated as a call to a database stored procedure. The Function attribute is used to declare that a call to a method defined on a DataContext or Schema is translated as a call to a database user-defined scalar or table-valued function. The Parameter attribute is used to declare a mapping between a method and the parameters of a database stored procedure or user-defined function.

The InheritanceMapping attribute is used to describe the correspondence between a particular discriminator code and an inheritance subtype. All InheritanceMapping attributes used for an inheritance hierarchy must be declared on the root type of the hierarchy. A graph is a general term for a data structure of objects all referring to each other by references.

A hierarchy (or tree) is a degenerate form of graph. Domain-specific object models often describe a network of references that are best described as a graph of objects. The health of your object graph is vitally important to the stability of your application.

That's why it is important to make sure references within the graph remain consistent with your business rules and/or the constraints defined in the database. LINQ to SQL does not automatically manage consistency of relationship references for you. When relationships are bidirectional, a change to one side of the relationship should automatically update the other.

Note that it is uncommon for normal objects to behave this way so it is unlikely that you would have designed your objects this way otherwise. LINQ to SQL does provide a few mechanisms to make this work easy and a pattern for you to follow to make sure you are managing your references correctly. Entity classes generated by the code generation tool will automatically implement the correct patterns.

The EntitySet (EntitySet(OfT) in Visual Basic) type has a constructor that allows you to supply two delegates to be used as callbacks; the first when an item is added to the collection, the second when it is removed. As you can see from the example, the code you specify for these delegates can and should be written to update the reverse relationship property. This is how the Customer property on an Order instance is automatically changed when an order is added to a customer's Orders collection.
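A sketch of that constructor usage inside a Customer class (a reconstruction of the example the text describes):

```csharp
private EntitySet<Order> _Orders;

public Customer()
{
    // First delegate runs on Add(), second on Remove(); each keeps the
    // reverse Order.Customer reference consistent with this collection.
    _Orders = new EntitySet<Order>(
        order => { order.Customer = this; },
        order => { order.Customer = null; });
}
```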

Implementing the relationship on the other end is not as easy. The EntityRef (EntityRef(OfT) in Visual Basic) is a value type defined to contain as little additional overhead from the actual object reference as possible. It has no room for a pair of delegates.

Instead, the code managing graph consistency of singleton references should be embedded in the property accessors themselves. Take a look at the setter. When the Customer property is being changed the order instance is first removed from the current customer's Orders collection and then only later added to the new customer's collection.

Notice that before the call to Remove() is made the actual entity reference is set to null. This is done to avoid recursion when the Remove() method is called. Remember, the EntitySet will use callback delegates to assign this object's Customer property to null.

The same thing happens right before the call to Add(). The actual entity reference is updated to the new value. This will again curtail any potential recursion and of course accomplish the task of the setter in the first place.
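The setter being described can be sketched as follows (a reconstruction, assuming the Order entity shown earlier with an `_Customer` EntityRef field):

```csharp
public Customer Customer
{
    get { return _Customer.Entity; }
    set
    {
        Customer previous = _Customer.Entity;
        if (previous != value)
        {
            if (previous != null)
            {
                _Customer.Entity = null;       // null first, so Remove() cannot recurse
                previous.Orders.Remove(this);
            }
            _Customer.Entity = value;          // set before Add(), again curbing recursion
            if (value != null)
                value.Orders.Add(this);
        }
    }
}
```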

The definition of a one-to-one relationship is very similar to the definition of a one-to-many relationship from the side of the singleton reference. Instead of Add() and Remove() being called, a new object is assigned or a null is assigned to sever the relationship. Again, it is vital that relationship properties maintain the consistency of the object graph.

If the in-memory object graph is inconsistent with the database data, then a run-time exception is generated when the SubmitChanges method is called. Consider using the code generation tool to maintain graph consistency for you. Your objects may also participate in the change tracking process.

It is not required that they do, but they can considerably reduce the amount of overhead needed to keep track of potential object changes. It is likely that your application will retrieve many more objects from queries than will end up being modified. Without proactive help from your objects, the change tracking service is limited in how it can actually track changes.

Since there is no true interception service in the runtime, the formal tracking does not actually occur. Instead, duplicate copies of the objects are stored when they are first retrieved. Later, when you call SubmitChanges(), these copies are used to compare against the ones you've been given.

If their values differ, then the object has been modified. This means that every object requires two copies in memory even if you never change them. A better solution is to have the objects themselves announce to the change tracking service when they are indeed changed.

This can be accomplished by having the object implement an interface that exposes a callback event. The change tracking service can then wire up each object and receive notifications when they change. To assist in improved change tracking, your entity classes must implement the INotifyPropertyChanging interface.

It only requires you to define an event called PropertyChanging—the change tracking service then registers with your event when your objects come into its possession. All you are required to do is raise this event immediately before you are about to change a property's value. Don't forget to put the same event raising logic in your relationship property setters, too.
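A minimal sketch of such a self-notifying property (using the released System.ComponentModel.INotifyPropertyChanging shape; mapping attributes omitted for brevity):

```csharp
public class Customer : INotifyPropertyChanging
{
    public event PropertyChangingEventHandler PropertyChanging;

    private string _CompanyName;
    public string CompanyName
    {
        get { return _CompanyName; }
        set
        {
            // Raise the event *before* overwriting the value, so the change
            // tracker can capture the original.
            if (PropertyChanging != null)
                PropertyChanging(this, new PropertyChangingEventArgs("CompanyName"));
            _CompanyName = value;
        }
    }
}
```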

For EntitySets, raise the events in the delegates you supply.

LINQ to SQL supports single-table mapping, whereby an entire inheritance hierarchy is stored in a single database table. The table contains the flattened union of all possible data columns for the whole hierarchy, and each row has nulls in the columns that are not applicable to the type of the instance represented by the row.

The single-table mapping strategy is the simplest representation of inheritance and provides good performance characteristics for many different categories of queries. Mapping a hierarchy requires the Table attribute on the root class and an InheritanceMapping attribute for each class in the hierarchy structure.

For non-abstract classes, this attribute must define a Code property (a value that appears in the database table in the inheritance discriminator column to indicate which class or subclass the row of data belongs to) and a Type property (which specifies the class or subclass that the code value signifies). A single InheritanceMapping attribute should also carry an IsDefault property, which designates a "fallback" mapping in case the discriminator value from the database table does not match any of the Code values in the inheritance mappings.

Finally, an IsDiscriminator property on a Column attribute signifies that this is the column that holds the Code value for inheritance mapping. No special attributes or properties are required on the subclasses. Note especially that subclasses do not have the Table attribute.

In the following example, data contained in the Car and Truck subclasses are mapped to the single database table Vehicle. Note that the types of the columns that represent fields in the subtypes have to be nullable or they need to have a default specified. This is necessary for the insert commands to be successful.
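The example the text refers to is not included in this excerpt; a reconstruction under the stated rules might look like this (discriminator codes and column names are assumptions):

```csharp
[Table(Name = "Vehicle")]
[InheritanceMapping(Code = "C", Type = typeof(Car))]
[InheritanceMapping(Code = "T", Type = typeof(Truck))]
[InheritanceMapping(Code = "V", Type = typeof(Vehicle), IsDefault = true)]
public class Vehicle
{
    [Column(IsDiscriminator = true)] public string Key;   // holds the Code value
    [Column(IsPrimaryKey = true)]    public string VIN;
}

public class Car : Vehicle            // no Table attribute on subclasses
{
    [Column] public int? TrimCode;    // nullable, so inserting Trucks succeeds
}

public class Truck : Vehicle
{
    [Column] public int? Tonnage;     // likewise nullable or defaulted
}
```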

You can expand a hierarchy far beyond the simple sample already provided. Since entity classes have attributes describing the structure of the relational database tables and columns, it is possible to use this information to create new instances of your database. You can call the CreateDatabase() method on the DataContext to have LINQ to SQL construct a new database instance with a structure defined by your objects.

There are many reasons you might want to do this: you might be building an application that automatically installs itself on a customer system, or a client application that needs a local database to save its offline state. For these scenarios, CreateDatabase() is ideal—especially if a known data provider like SQL Server Express 2005 is available. However, the data attributes may not encode everything about an existing database structure.

The contents of user-defined functions, stored procedures, triggers, and check constraints are not represented by the attributes. The CreateDatabase() function will only create a replica of the database using the information it knows, which is the structure of the database and the types of columns in each table. Yet, for a variety of databases this is sufficient.

Below is an example of how you can create a new database named MyDVDs. LINQ to SQL also provides an API to drop an existing database prior to creating a new one. The database creation code above can be modified to first check for an existing version of the database using DatabaseExists() and then drop it using DeleteDatabase().
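The MyDVDs example is not reproduced in this excerpt; a reconstruction might look like the following (`MyDVDsDataContext` stands in for a strongly typed context whose entity classes define the MyDVDs schema):

```csharp
var db = new MyDVDsDataContext(@"C:\MyDVDs.mdf");

if (db.DatabaseExists())      // check for an existing version of the database
    db.DeleteDatabase();      // and drop it first

db.CreateDatabase();          // build the new database from the mapping attributes
```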

After the call to CreateDatabase(), the new database is able to accept queries and commands like SubmitChanges() to add objects to the MDF file. It is also possible to use CreateDatabase() with a SKU other than SQL Server Express, using either an MDF file or just a catalog name. It all depends on what you use for your connection string.

The information in the connection string is used to define the database that will exist, not necessarily one that already exists. LINQ to SQL will fish out the relevant bits of information and use them to determine what database to create and on what server to create it. Of course, you will need database admin rights or equivalent on the server to do so.

LINQ to SQL is part of the ADO.NET family of technologies. It is based on services provided by the ADO.NET provider model, so it is possible to mix LINQ to SQL code with existing ADO.NET applications. When you create a LINQ to SQL DataContext, you can supply it with an existing ADO.NET connection.

All operations against the DataContext—including queries—will use the connection you provided. If the connection was already opened, LINQ to SQL will honor your authority over the connection and leave it as is when finished with it. Normally, LINQ to SQL closes its connection as soon as an operation is finished, unless a transaction is in scope.

You can always access the connection used by your DataContext through the Connection property and close it yourself. You can also supply the DataContext with your own database transaction, in case your application has already initiated one and you desire the DataContext to play along with it. Whenever a Transaction is set, the DataContext will use it whenever it issues a query or executes a command.

Don't forget to assign the property back to null when you are done. However, the preferred method of doing transactions with the .NET Framework is to use the TransactionScope object.

It allows you to make distributed transactions that work across databases and other memory-resident resource managers. The idea is that transaction scopes start cheap, only promoting themselves to a full distributed transaction when they actually refer to multiple databases or multiple connections within the scope of the transaction. Connections and transactions are not the only way you can interoperate with ADO.NET.

You might find that in some cases the query or submit changes facility of the DataContext is insufficient for the specialized task you may want to perform. In these circumstances it is possible to use the DataContext to issue raw SQL commands directly to the database. The ExecuteQuery() method lets you execute a raw SQL query and converts the result of your query directly into objects.

For example, assuming that the data for the Customer class is spread over two tables customer1 and customer2, the following query returns a sequence of Customer objects. As long as the column names in the tabular results match column properties of your entity class LINQ to SQL will materialize your objects out of any SQL query. The ExecuteQuery() method also allows parameters.
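The query the text describes is not reproduced in this excerpt; both the plain and parameterized forms might be sketched as follows (table and column names are illustrative):

```csharp
// Column names in the result set are matched to Customer's mapped members.
IEnumerable<Customer> custs = db.ExecuteQuery<Customer>(
    @"select c1.custid as CustomerID, c2.custName as ContactName
      from customer1 as c1, customer2 as c2
      where c1.custid = c2.custid");

// Curly-brace parameters become generated SQL parameters (@p0, @p1, ...).
IEnumerable<Customer> london = db.ExecuteQuery<Customer>(
    "select * from Customers where City = {0}", "London");
```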

The parameters are expressed in the query text using the same curly notation used by Console.WriteLine() and String.Format(). In fact, String.Format() is actually called on the query string you provide, substituting the curly braced parameters with generated parameter names like @p0, @p1 ..., @p(n).

A change conflict occurs when the client attempts to submit changes to an object and one or more values used in the update check have been updated in the database since the client last read them.

Note: Only members mapped as UpdateCheck.Always or UpdateCheck.WhenChanged participate in optimistic concurrency checks.

No check is performed for members marked UpdateCheck.Never. Resolution of this conflict includes discovering which members of the object are in conflict, and then deciding what to do about it. Note that optimistic concurrency might not be the best strategy in your particular situation.

Sometimes it is perfectly reasonable to "let the last update win". Conflict resolution is the process of refreshing a conflicting item by querying the database again and reconciling any differences. When an object is refreshed, the change tracker has the old original values and the new database values.

LINQ to SQL then determines whether the object is in conflict or not. If it is, LINQ to SQL determines which members are involved. If the new database value for a member is different from the old original (which was used for the update check that failed), this is a conflict.

Any member conflicts are added to a conflict list. For example, in the following scenario, User1 begins to prepare an update by querying the database for a row. Before User1 can submit the changes, User2 has c
