Managed Extensibility Framework
25 April 08 09:45 AM | kcwalina | 40 Comments   

Several months ago we formed what we call the Application Framework Core team. The charter of the team is to play the same role in the application frameworks space (WinForms, ASP.NET, WPF, Silverlight) as the Base Class Libraries (BCL) team plays at the bottom of the platform stack.

The BCL team did a good job fulfilling the role of the team responsible for decreasing duplication and providing common abstractions for the low levels of the platform. Unfortunately, we did not have a similar team focused on this set of issues higher up the stack. This resulted in some unfortunate duplication (like separate data binding models for each of the application models, and different dependency property systems for WPF and WF) and a lack of common abstractions (what undo API should my generic application plugin call?) for application model code. The Application Framework Core team is now in place to start addressing these problems.

One of the first concrete projects that we are working on and are ready to slowly talk about is what we call the Managed Extensibility Framework (MEF). We observed that there are more and more places in the .NET Framework itself and increasingly in managed applications (like Visual Studio) where we want to provide, or already provide, hooks for 3rd party extensions. Think about TraceListener plugins for the TraceSource APIs, pluggable rules for Visual Studio Code Analysis (and the standalone FxCop), etc. In the absence of a built-in extensibility framework (like MEF), our developers who want to enable such extensions are often forced to create custom mechanisms, which leads to duplication. We hope that MEF will both stop such duplication and encourage/enable more extensibility in the Framework and in applications built on top of it.

We will blog more details about MEF in the upcoming months, but here are some early details (subject to change, of course): MEF is a set of features referred to in the academic community and in the industry as a Naming and Activation Service (returns an object given a “name”), a Dependency Injection (DI) framework, and a Structural Type System (duck typing). These technologies (and others, like System.AddIn) together are intended to enable what we call Open and Dynamic Applications, i.e. to make it easier and cheaper to build extensible applications and extensions.

The work we are doing builds on several existing Microsoft technologies (like the Unity framework) and on feedback from the DI community. The relationship with the Unity team is the regular relationship between the P&P group and the .NET Framework group, where we trickle successful technologies and ideas from the P&P team into the .NET Framework after they have passed the test of time. We have done this with some features in the diagnostics, exceptions, and UI space in the past. The direct engagement with the DI community is also starting. We gave a talk on the technology at last week’s MVP Summit, and talked with Jeremy Miller (the owner of StructureMap) and Ayende Rahien (Rhino Mocks). We got lots of great feedback from Jeremy and Ayende, and I think their experience in the DI space and their feedback will be invaluable as the project evolves. Thanks guys! We are of course also looking forward to engaging others in the DI community.

And finally here is some code showing basic scenarios our framework supports:

Creating an Extension Point in an Application:

public class HelloWorld {

    [Import] // import declares what a component needs
    public OutputDevice Output;

    public void SayIt() {
        Output.WriteLine("Hello World");
    }
}

// Extension Contract
public abstract class OutputDevice {
    public abstract void WriteLine(string output);
}

Creating an Extension:

[Export(typeof(OutputDevice))] // export declares what a component gives
public class CustomOutput : OutputDevice {
    public override void WriteLine(string output) {
        Console.WriteLine(output);
    }
}

 

Magic that composes (dependency-injects) the application with the extensions:

var domain = new ComponentDomain();
var hello = new HelloWorld();

// of course this can be implicit
domain.AddComponent(hello);
domain.AddComponent(new CustomOutput());

domain.Bind(); // bind matches the needs to the gives
hello.SayIt();

Expecting lots of questions, I will preemptively answer one :-): we don’t yet know whether or when we will ship this. We do have working code and we are looking into releasing a preview/CTP of the technology. For now, we would be very interested in high-level feedback. What do you think hinders extensibility in frameworks and applications? Where would you like the Framework to be more extensible? Which DI framework features do you need, like, want, and use on a daily basis? E.g., is constructor injection required?

And lastly, we are hiring! :-)

Framework Design Guidelines Digest v2
09 April 08 02:19 PM | kcwalina | 8 Comments   

Almost 4 years ago, I blogged about the Framework Design Guidelines Digest. At that time, my blog engine did not support attaching files and I did not have any convenient online storage to put the document on, so I asked people to email me if they wanted an offline copy.

Believe it or not, I still receive 1-2 emails a week with requests for the offline copy. Now that I have a convenient way to put the document online, and since I wanted to make some small updates, I am reposting the digest. The abstract is below and the full document can be downloaded here.

This document is a distillation and a simplification of the most basic guidelines described in detail in a book titled Framework Design Guidelines by Krzysztof Cwalina and Brad Abrams. Framework Design Guidelines were created in the early days of .NET Framework development. They started as a small set of naming and design conventions but have been enhanced, scrutinized, and refined to a point where they are generally considered the canonical way to design frameworks at Microsoft. They carry the experience and cumulative wisdom of thousands of developer hours over several versions of the .NET Framework.

 

Framework Design Studio Released
04 April 08 11:10 AM | kcwalina | 17 Comments   

When I was coming back from Mix 2007, I was bored on the plane and so started to write a dev tool. What a geeky thing to do on a plane. :-)

The tool allows comparing two versions of an assembly to identify API differences: API additions and removals. Comparing versions of APIs comes in very handy during the API design process. Often you want to ensure that things did not get removed accidentally (which can cause incompatibilities), and as APIs grow, you want to review the additions without having to re-review APIs that were already reviewed. The tool, called Framework Design Studio (FDS), supports these scenarios.
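This is not the FDS code itself, just a minimal sketch of the underlying idea (the ApiDiff name and shape are invented for this illustration): flatten the public API surface of two assembly versions into strings and diff the sets to find additions and removals.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public static class ApiDiff {
    // Flatten an assembly's public API into comparable strings.
    public static HashSet<string> Surface(Assembly assembly) {
        var surface = new HashSet<string>();
        foreach (Type type in assembly.GetExportedTypes())
            foreach (MemberInfo member in type.GetMembers())
                surface.Add(type.FullName + "::" + member);
        return surface;
    }

    public static void Report(Assembly oldVersion, Assembly newVersion) {
        HashSet<string> oldApi = Surface(oldVersion);
        HashSet<string> newApi = Surface(newVersion);
        foreach (string added in newApi.Except(oldApi))
            Console.WriteLine("+ " + added);   // addition: review it
        foreach (string removed in oldApi.Except(newApi))
            Console.WriteLine("- " + removed); // removal: potential breaking change
    }
}
```

The real tool works on IL and tracks review state, but the core comparison is essentially a set difference like this.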

Later on, I got lots of help from Hongping Lim (a developer on our team), and David Fowler (our 2007 summer intern). David ported the application to WPF, and Hongping basically took it from an early prototype stage to what it is today and made it possible to ship it externally.

Anyway, you can get the tool at the MSDN Code Gallery, the user guide is attached to this post, and lastly, here is the API diff output that the tool generates. Hope you find it useful.


Simulated Covariance for .NET Generics
02 April 08 08:00 AM | kcwalina | 17 Comments   

I just wrote this pattern, but I am not sure if I should add it officially to the Framework Design Guidelines. It seems like a bit of a corner case scenario, though I do get questions about it from time to time. Anyway, let me know what you think. 

 

Different constructed types don’t have a common root type. For example, there would not be a common representation of IEnumerable<string> and IEnumerable<object> if not for a pattern implemented by IEnumerable<T> called Simulated Covariance. This post describes the details of the pattern.

Generics is a very powerful type system feature added in the .NET Framework 2.0. It allows creation of so-called parameterized types. For example, List<T> is such a type and it represents a list of objects of type T. The T is specified at the time the instance of the list is created.

 

List<string> names = new List<string>();
names.Add("John Smith");
names.Add("Mary Johnson");

 

Such Generic data structures have many benefits over their non-Generic counterparts. But they also have some, sometimes surprising, limitations. For example, some users expect that a List<string> can be cast to List<object>, just as a String can be cast to Object. But unfortunately, the following code won’t even compile.

 

List<string> names = new List<string>();
List<object> objects = names; // this won't compile

 

There is a very good reason for this limitation, and that is to allow for full strong typing. For example, if you could cast List<string> to a List<object> the following incorrect code would compile, but the program would fail at runtime.

 

static void Main(){
    List<string> names = new List<string>();

    // this of course does not compile, but if it did
    // the whole program would compile, but would be incorrect as it
    // attempts to add arbitrary objects to a list of strings.
    AddObjects((List<object>)names);

    string name = names[0]; // how could this work?
}

// this would (and does) compile just fine.
static void AddObjects(List<object> list){
    list.Add(new object()); // it's a list of strings, really. Should we throw?
    list.Add(new Button());
}

 

Unfortunately this limitation can also be undesired in some scenarios. For example, there is nothing wrong with casting a List<string> to IEnumerable<object>, like in the following example.

 

List<string> names = new List<string>();
IEnumerable<object> objects = names; // this won't compile
foreach(object obj in objects){
    Console.WriteLine(obj.ToString());
}

 

In general, having a way to represent “any list” (or in general “any instance of this generic type”) is very useful.

 

// what type should ??? be?
static void PrintItems(??? anyList){
    foreach(object obj in anyList){
        Console.WriteLine(obj.ToString());
    }
}

 

Unfortunately, unless List<T> implemented a pattern that will be described in a moment, the only common representation of all List<T> instances would be System.Object. But System.Object is too limiting and would not allow the PrintItems method to enumerate items in the list.

The reason that casting to IEnumerable<object> is just fine, but casting to List<object> can cause all sorts of problems, is that in the case of IEnumerable<object>, the object appears only in an output position (the return type of GetEnumerator is IEnumerator<object>). In the case of List<object>, the object appears in both output and input positions. For example, object is the type of the input to the Add method.

// T does not appear as an input to any members or dependencies of this interface
public interface IEnumerable<T> {
    IEnumerator<T> GetEnumerator();
}

public interface IEnumerator<T> {
    T Current { get; }
    bool MoveNext();
}

 

// T does appear as an input to members of List<T>
public class List<T> {
    public void Add(T item); // T is an input here
    public T this[int index] {
        get;
        set; // T is actually an input here
    }
}

 

In other words, we say that in IEnumerable<T>, the T is at covariant positions (outputs). In List<T>, the T is at both covariant and contravariant (inputs) positions.

 

To solve the problem of not having a common type representing the root of all constructions of a generic type, you can implement what’s called the Simulated Covariance Pattern.

 

Given a generic type (class or interface) and its dependencies:

 

public class Foo<T> {
    public T Property1 { get; }
    public T Property2 { set; }
    public T Property3 { get; set; }
    public void Method1(T arg1);
    public T Method2();
    public T Method3(T arg);
    public Type1<T> GetMethod1();
    public Type2<T> GetMethod2();
}

public class Type1<T> {
    public T Property { get; }
}

public class Type2<T> {
    public T Property { get; set; }
}

 

Create a new interface (Root Type) with all members containing Ts at contravariant positions removed. In addition, feel free to remove all members that might not make sense in the context of the trimmed down type.

 

public interface IFoo<T> {
    T Property1 { get; }
    T Property3 { get; } // setter removed
    T Method2();
    Type1<T> GetMethod1();
    IType2<T> GetMethod2(); // note that the return type changed
}

public interface IType2<T> {
    T Property { get; } // setter removed
}

 

The generic type should then implement the interface explicitly and “add back” the strongly typed members (using T instead of object) to its public API surface.

 

public class Foo<T> : IFoo<object> {
    public T Property1 { get; }
    public T Property2 { set; }
    public T Property3 { get; set; }
    public void Method1(T arg1);
    public T Method2();
    public T Method3(T arg);
    public Type1<T> GetMethod1();
    public Type2<T> GetMethod2();

    object IFoo<object>.Property1 { get; }
    object IFoo<object>.Property3 { get; }
    object IFoo<object>.Method2() { return null; }
    Type1<object> IFoo<object>.GetMethod1();
    IType2<object> IFoo<object>.GetMethod2();
}

 

public class Type2<T> : IType2<object> {
    public T Property { get; set; }
    object IType2<object>.Property { get; }
}

 

Now, all constructed instantiations of Foo<T> have a common root type: IFoo<object>.

 

var foos = new List<IFoo<object>>();
foos.Add(new Foo<int>());
foos.Add(new Foo<string>());

foreach(IFoo<object> foo in foos){
    Console.WriteLine(foo.Property1);
    Console.WriteLine(foo.GetMethod2().Property);
}

 

✓ CONSIDER using the Simulated Covariance Pattern, if there is a need to have a representation for all instantiations of a generic type.

The pattern should not be used frivolously, as it results in additional types in the library and can make the existing types more complex.

 

✓ DO ensure that the implementation of the root’s members is equivalent to the implementation of the corresponding generic type members.

There should not be an observable difference between calling a member on the root type and calling the corresponding member on the generic type. In many cases the members of the root are implemented by calling members on the generic type.  

public class Foo<T> : IFoo<object> {
    public T Property3 { get { ... } set { ... } }
    object IFoo<object>.Property3 { get { return Property3; } }
}

 

✓ CONSIDER using an abstract class instead of an interface to represent the root.

This might sometimes be a better option, as interfaces are more difficult to evolve (see section X). On the other hand, there are some problems with using abstract classes for the root. Abstract class members cannot be implemented explicitly, and the subtypes need to use the new modifier. This makes it tricky to implement the root’s members by delegating to the generic type members.
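To make the trickiness concrete, here is a hypothetical sketch (FooBase, Foo<T>, and GetProperty1Core are names invented for this example). The abstract root exposes the weakly typed member and delegates to a protected core method, since it cannot be implemented explicitly the way an interface member can:

```csharp
using System;

public abstract class FooBase {
    // The weakly typed member delegates to a protected core method.
    public object Property1 { get { return GetProperty1Core(); } }
    protected abstract object GetProperty1Core();
}

public class Foo<T> : FooBase {
    private readonly T value;
    public Foo(T value) { this.value = value; }

    // 'new' hides the object-typed FooBase.Property1 with a strongly
    // typed version; the core method delegates back to it.
    public new T Property1 { get { return value; } }
    protected override object GetProperty1Core() { return Property1; }
}
```

These contortions (the extra core method, the new modifier) are exactly why the interface-based root, with its explicit implementations, is usually the simpler option.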

 

✓ CONSIDER using a non-generic root type, if such a type is already available.

For example, List<T> implements IEnumerable for the purpose of simulating covariance.
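For example, the PrintItems method from earlier can simply take the non-generic IEnumerable, which every List<T> already implements (a small self-contained sketch; the Printer class name is mine):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

public static class Printer {
    // IEnumerable plays the role of the "???" placeholder from the
    // earlier example: it represents any list, of any element type.
    public static void PrintItems(IEnumerable anyList) {
        foreach (object obj in anyList) {
            Console.WriteLine(obj.ToString());
        }
    }

    public static void Main() {
        PrintItems(new List<string> { "a", "b" });
        PrintItems(new List<int> { 1, 2, 3 });
    }
}
```

This works here because enumeration only needs T in output positions; if you needed strongly typed input members, the simulated covariance root would still be required.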

 

Job Openings on the .NET Framework Core Team
14 March 08 12:49 PM | kcwalina | 14 Comments   

 

We have been incubating ideas about building a simple extensibility framework for some time. Now, as plans for the next version of the .NET Framework crystallize a bit more, we decided to productize the project. As a result, we have opened a job position (and most probably will be opening more) on the .NET Framework team. If you are interested, please see details here and send me an email at “kcwalina at microsoft.com.”

 

So, what is this extensibility framework? Initially, it will be a low-level core .NET Framework feature to make it easy for applications to expose extensibility points and consume extensions. Think about what, for example, FxCop has to do to define rule contracts and load rules implemented by the community. These are the basics, and we can talk about the broader and longer-term vision when you come to Redmond for an interview :-)

 

This is a technical Program Manager position in Redmond, WA, and it’s basically exactly the job I did when I joined Microsoft. Besides working on the Framework features, all Program Managers on the core team have opportunities to work on API design and architecture projects.

LINQ Design Guidelines
12 March 08 09:13 PM | kcwalina | 5 Comments   

Mircea, a program manager on my team, has worked on the development of design guidelines for LINQ-related features. The guidelines were reviewed internally and are now available on Mitch’s blog. We might still iterate on them a bit, but quite soon I plan to incorporate them into the Framework Design Guidelines manuscript, so if you have feedback, it’s time to send it our way! :-)

Thanks!

Video Recording of "Framework Engineering: Architecting, Designing, and Developing Reusable Libraries"
08 January 08 10:42 AM | kcwalina | 6 Comments   

I just received a video recording of a talk I did at the last TechEd. You can find the abstract below, and the WMV file can be downloaded from here. Hope you find it useful.

[UPDATE: I attached the slides in XPS format. The PPT file is 10x larger.]

Framework Engineering: Architecting, Designing, and Developing Reusable Libraries  

This session covers the main aspects of reusable library design: API design, architecture, and general framework engineering processes. Well-designed APIs are critical to the success of reusable libraries, but there are other aspects of framework development that are equally important, yet not widely covered in literature. Organizations creating reusable libraries often struggle with the process of managing dependencies, compatibility, and other design processes so critical to the success of modern frameworks. Come to this session and learn about how Microsoft creates its frameworks. The session is based on experiences from the development of the .NET Framework and Silverlight, and will cover processes Microsoft uses in the development of managed frameworks.  

 

Framework Design Guidelines 2nd Edition
03 January 08 02:39 PM | kcwalina | 30 Comments   

My blog was relatively silent for several weeks. First, I was traveling to Europe for the TechEd, then was busy at work, then the holiday break. It's time to go back to more regular posting.

I will start with an announcement (or at least a more formal and broader one): after my TechEd presentation, the first question I got from the audience was whether we are working on a new edition of the Framework Design Guidelines. The answer is "yes", which I am super excited about. Right before the conference, we signed formal documents with the publisher and started working on the book. It's going to cover the new features in the .NET Framework 3.0 and 3.5, and new advances in languages (e.g. LINQ) that are relevant to framework design. BTW, I would appreciate feedback on what specifically you would like to get covered or clarified.

We are shooting to have the book ready around the end of 2008, but the publisher already sent me a draft of the cover art:

What Do Swimmers Have to Say About Framework Design?
04 October 07 05:09 PM | kcwalina | 7 Comments   

I am starting to feel pressure to finish up slides for my presentation at the upcoming TechEd in Barcelona. I will be talking about framework architecture and design. Here is the abstract I took from the conference’s site:

WIN304 Framework Engineering: Architecting, Designing, and Developing Reusable Libraries  

This session covers the main aspects of reusable library design: API design, architecture, and general framework engineering processes. Well-designed APIs are critical to the success of reusable libraries, but there are other aspects of framework development that are equally important, yet not widely covered in literature. Organizations creating reusable libraries often struggle with the process of managing dependencies, compatibility, and other design processes so critical to the success of modern frameworks. Come to this session and learn about how Microsoft creates its frameworks. The session is based on experiences from the development of the .NET Framework and Silverlight, and will cover processes Microsoft uses in the development of managed frameworks.  

I am super excited about the talk, which I want to make into a continuation of the API design pre-con I did with Brad Abrams at the last PDC (BTW, the same content is available here). The PDC talk was about designing the API surface, but there is so much more to framework design!

I am also excited to be coming back to the wonderful city of Barcelona. Last time I was there, believe it or not, I was on the Polish swimming team competing in the Olympics (see #13). I hope this trip will bring back some memories from the other and completely different “career” I had in the past.

FxCop Rule for Multi-Targeting
02 October 07 04:50 PM | kcwalina | 18 Comments   

Two months ago, Scott blogged about the multi-targeting support in Visual Studio 2008. I worked on this feature in the planning phase (read “long time ago”), and so I am quite thrilled to see it finally in the hands of developers. Especially since, several years ago, I remember our small working group sitting in a room wondering whether such a feature was even possible. The complexities of implementing it in a large project like Visual Studio seemed quite daunting.

The thing that made implementing multi-targeting in one release possible was the concept of Red and Green bits.  You can read about the concept here, here, and here, but quickly: red bits are Framework assemblies that existed in the .NET Framework 2.0 and were serviced in versions 3.0 and 3.5. Green bits are assemblies that were added either in the version 3.0 or 3.5. The servicing changes in Red bit APIs are limited (after all it is servicing) to a very small number of API additions and bug fixes.

 We leveraged (and influenced) the decision to limit the number of changes to existing assemblies to drastically simplify the requirements for the multi-targeting system. That is, we made an assumption that the majority of differences between the Framework versions (targets) are on assembly boundaries. 

But now I have to confess, there are some limitations in this design that we accepted when we made the original simplifying assumption. There is a very limited number of APIs being added to the Red assemblies, and the multi-targeting system is currently not able to detect these. For example, if you set the target to the .NET Framework 2.0 and read the newly-added (albeit obscure) property GCSettings.LatencyMode, the program will compile just fine but then fail to run on the .NET Framework 2.0 RTM. The reason is that the property was added to an existing (Red) class, GCSettings, not to a new class in a new assembly. Even though the number of such additional APIs in Red bits is very small (and we recommend that you still test your programs on all targeted platforms), this can be quite annoying. And so, feeling a bit responsible for this (and trying to promote FxCop :-)), I wrote an FxCop rule that can analyze the IL of an assembly targeted at Framework 2.0 and warn you about all calls to members that are not present in 2.0 RTM.

Here is how it works: The test program below uses the new GCLatencyMode enum and calls the new GCSettings.LatencyMode property. As I mentioned above, these APIs don’t exist in .NET Framework 2.0 RTM. Even if I built this project with the multi-targeting target set to Framework 2.0, the system would not complain about calling these APIs. But, as you can see in the error list, the FxCop analysis engine can deal even with this difficult-to-detect problem. You can think of the rule as a very smart (post-)compiler step.
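A minimal repro of the trap looks roughly like this (a sketch of the scenario, not the rule itself): the program compiles with the target set to .NET Framework 2.0, yet fails at runtime on a 2.0 RTM machine, because the property was added to the existing class in a later service release.

```csharp
using System;
using System.Runtime;

class Program {
    static void Main() {
        // GCSettings itself shipped in 2.0 RTM, but the LatencyMode
        // property was added to this existing (Red) class later, so
        // multi-targeting cannot catch this call by assembly reference
        // alone -- it fails on 2.0 RTM with a MissingMethodException.
        GCLatencyMode mode = GCSettings.LatencyMode;
        Console.WriteLine(mode);
    }
}
```

On a patched machine this simply prints the current latency mode; the FxCop rule flags the GCSettings.LatencyMode call at analysis time instead.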

 

A sample project with the rule is attached to this post. I provide it without any guarantees, as is, and in fact I am sure it has many bugs and problems. When I find time, I will work on it more to polish it a bit, but I don't promise (i.e. treat it as a sample).

BTW, to install the rule, just build it and drop it into the FxCop rules directory. On my machine it is at C:\Program Files\Microsoft Visual Studio 9.0\Team Tools\Static Analysis Tools\FxCop\Rules. In addition, you have to go to the settings of the project you want to analyze and turn on FxCop analysis (the “Code Analysis” tab in the project properties). Lastly, you can either rebuild or right-click on the project and choose “Run Code Analysis”. I hope it’s helpful.

Attachment(s): MultitargettingRules.zip
China Trip
02 October 07 11:42 AM | kcwalina | 1 Comments   

Sorry for not blogging for such a long time. First I was on a combined business/vacation trip to China, and when I came back, I got involved in some intensive planning for the future releases of the platform. I will try to post something about software in the next couple of days.

The trip to China was great. The first day I taught a class on API design at the Microsoft center in Beijing, and then my wife and I stayed for several days doing sightseeing, drinking the best tea ever, and indulging ourselves in great food (a complete photo documentary is here).

 

Duck Notation
18 July 07 11:20 AM | kcwalina | 34 Comments   

I have been working with the C# and VB teams on design guidelines for LINQ. We started to talk about the so-called Query Pattern, which describes what you need to do if you want a custom type to support the new query operators (select, where, group by, etc.).

The query operators use duck typing to determine whether they can operate on a type or not. This means that instead of implementing an interface (static typing), a queryable type will need to have a set of members that follow a specified set of conventions (naming, parameter and return types, etc.).

For example, C#’s foreach operator already uses duck typing. This might be surprising to some, but to support foreach in C# you don’t need to implement IEnumerable! All you have to do is:

Provide a public method GetEnumerator that takes no parameters and returns a type that has two members: a) a method MoveNext that takes no parameters and returns a Boolean, and b) a property Current with a getter that returns an Object.

For example, the following type supports foreach:

 

class Foo {
    public Bar GetEnumerator() { return new Bar(); }

    public struct Bar {
        public bool MoveNext() {
            return false;
        }
        public object Current {
            get { return null; }
        }
    }
}

 

// the following compiles just fine:
Foo f = new Foo();
foreach (object o in f) {
    Console.WriteLine("Hi!");
}

But, as you can see above, describing the foreach pattern in English (or any other spoken language) is quite difficult and not very readable, especially if you contrast it with the simplicity of specifying requirements based on static typing:

Implement IEnumerable.

… and having IEnumerable defined as:

 

public interface IEnumerable {
    IEnumerator GetEnumerator();
}

public interface IEnumerator {
    bool MoveNext();
    object Current { get; }
}

The English description gets much worse for something like the query pattern, which is way more complex than the foreach pattern. Because of that, I was thinking that there must be a better way to specify such patterns based on duck typing. But when I searched the web, to my surprise, I could not find any simple notation to do that. If you know of any, please let me know.

In the absence of an existing notation, I started to think about something like the following:

 

[Foreachable] {
    public [Enumerator] GetEnumerator();
}

[Enumerator] {
    public bool MoveNext();
    public [ItemType] Current { get; }
}

This seems much easier to parse than the English description of the pattern. What do you think?

How to Fight Complexity in Software (part I)
17 July 07 04:02 PM | kcwalina | 0 Comments   

A couple of weeks ago, Grady Booch gave a lecture at Microsoft. It was a pleasure to hear one of my software engineering heroes in person. Grady talked about “the promise, the limits, and the beauty of software.”

The main thing that captured my interest during the lecture was a discussion about the cost of complexity in software.

Grady said that there are many factors influencing the cost of software, but two of these factors have a disproportionately high impact on the overall cost: a) the processes used to develop and maintain the software, and b) the complexity of the codebase. To fight the growing complexity of the codebase, many companies allocate some percentage of their resources to refactoring, cleanup, and other activities that don’t directly result in features for the end user of the software. This really resonated with me. I am a huge fan of this approach. We do some of it here in the developer division (read about the MQ milestones here, here, and here), but I wish we, and the software industry in general, did even more.

FxCop Designers Honored with the Chairman's Award
03 July 07 04:21 PM | kcwalina | 8 Comments   

Last week, during the annual Engineering Excellence week, several Microsoft engineers and managers involved in the development of engineering tools and practices were presented with Engineering Excellence Awards. In addition, the principal designers of three static analysis tools were honored with the so-called “Engineering Excellence Chairman’s Award,” which is given for contributions that our chairman (Bill) considers especially important. The Chairman’s Award has been given only twice in the history of Microsoft.

Mike Fanning, Brad Abrams, and I received nice glass statuettes and framed letters from Bill and Jon (our VP responsible for EE). See pictures below. The letter says “The Engineering Excellence Chairman’s Award is Microsoft’s highest award for engineering group employees worldwide.”

It’s extremely rewarding to receive something like that for what started as a hobby project on which we worked mainly in our free time (at least, before it was officially productized). I would like to thank Brad for being the “let’s do it now” person, pushing the whole effort of design guidelines and FxCop development, and Mike for his passion for quality and for being the super-developer on the team.

Of course, many others contributed to the success of FxCop. Once we gather all the names, we will post a new entry or update this one to thank everybody and record all the contributions for posterity. [UPDATE: Brad just posted on this here.]

 

Generic Methods as Casts
07 June 07 10:56 AM | kcwalina | 3 Comments   

Somebody just asked me which of the following API design alternatives is better. They both do what we could call “casting.”

 

// Generic Method "Cast"
var foo = someInstance.SomeMethod<IFoo>();

// Simple Cast
var foo = (IFoo)someInstance.SomeMethod();

The Generic method approach can potentially have two benefits. First, a constraint can tell users what the subset of valid casts is. For example, the following API will allow casts only to a collection (a subtype of IEnumerable):

 

public TCollection SomeMethod<TCollection>() where TCollection : IEnumerable {
    ...
}

If the user calls this method and tries to “cast” the return value to, let’s say, an Int32, the compiler will complain. For example,

 

var i = someInstance.SomeMethod<int>();

… will generate the following error:

 

The type 'int' must be convertible to 'System.Collections.IEnumerable' in order to use it as parameter 'TCollection' in the generic type or method 'SomeClass.SomeMethod<TCollection>()'

Secondly, the Generic method might do different things based on the type parameter. The method relying on a simple cast has no knowledge of the type the user will cast its return value to.

 

public TCollection SomeMethod<TCollection>()
    where TCollection : IEnumerable, new() {

    TCollection newCollection = new TCollection();
    ArrayList newArray = newCollection as ArrayList;
    if (newArray != null) newArray.Capacity = 0;
    return newCollection;
}

If neither of these benefits applies in your case, I would use a simple cast. It’s more transparent in terms of what’s going on. The users of a generic method might wonder what kind of “magic” is being done with the generic type parameter.
