Eliminating Postbacks: Setting Up jQuery On ASP.NET Web Forms and Managing Data On The Client

by Jon Davis 19. October 2008 12:11

This is a follow-up to a prior post, Keys to Web 3.0 Design and Development When Using ASP.NET. Now I want to focus solely on getting jQuery and client-side data management working with ASP.NET 2.0 without ASP.NET AJAX or ASP.NET MVC.

So you're stuck with Visual Studio 2005 and ASP.NET Web Forms. You want to flex your ninja skills. You can't jump into ASP.NET MVC or ASP.NET AJAX or an alternate templating solution like PHP, though. Are you going to die ([Y]/N)? N

 

Why would you use Web Forms in the first place? Well, you might want to take advantage of some of the data binding shorthand that can be done with Web Forms. For this blog entry, I'll use the example of a pre-populated DropDownList (a <select> tag filled with <option>'s that came from the database). 

This is going to be kind of a "for Dummies" post. Anyone who has good experience with ASP.NET and jQuery is likely already quite familiar with how to get jQuery up and running. But there are a few caveats that an ASP.NET developer would need to remember or else things become tricky (and again, no more tricky than is easily mastered by an expert ASP.NET developer).

Caveat #1: You cannot simply throw a script include into the head of an ASPX page.

The following page markup is invalid:

<%@ Page Language="C#" AutoEventWireup="true"  CodeFile="Default.aspx.cs" Inherits="_Default" %> 

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
    <script language="javascript" type="text/javascript" src="jquery-1.2.6.js"></script>
</head>
<body>

It's invalid because ASP.NET truncates tags in a <head runat="server"> that it doesn't recognize or know how to deal with, such as the script include above. You have to either put the script into the <body> or register the script with the page, which will then cause it to be emitted into the <body>.

Registering the script with the page rather than putting it in the body yourself is recommended by Microsoft because:

  1. It allows you to guarantee the life cycle--more specifically the load order--of your scripts.
  2. It allows ASP.NET to do the same (manage the load order) of your scripts alongside the scripts on which ASP.NET Web Forms is running. Remember that Web Forms hijacks the <form> tag and the onclick behavior of ASP.NET buttons and such things, so it does know some Javascript already and needs to maintain that.
  3. When a sub-page or an .ascx control requires a dependency script, it helps to prevent the same dependency script from being added more than once.
  4. It allows controls to manage their own scripts. More on that in a moment.
  5. It allows you to put the inclusion markup into a server language context where you can use ResolveUrl("~/...") to resolve the location of the file dynamically according to the app path. This is very important in web sites where a directory hierarchy--with ASP.NET files buried inside subdirectories--is in place.

Here's how to add an existing external script (a script include) like jQuery into your page. Go to the code-behind file (or the C# section if you're not using code-behind) and register jQuery like so:

protected void Page_Load(object sender, EventArgs e)
{
    Page.ClientScript.RegisterClientScriptInclude(
        typeof(_YourPageClassName_), "jQuery", ResolveUrl("~/js/jquery-1.2.6.js"));
} 

A hair more verbose than I'd prefer, but it's not awful. In the case of jQuery, which is usually a foundational dependency for many other scripts (and itself has no dependencies), you might also consider doing this in an OnInit() override rather than in Page_Load(). That mostly matters when you're adding the script from a control, where the lifecycle is less predictable in Page_Load() than in OnInit(); I'll get into that shortly.

There is a way to inject a script into the <head>, such as described here: http://www.aspcode.net/Javascript-include-from-ASPNET-server-control.aspx. However, that is even more verbose, and it's not really considered "the ASP.NET Web Forms way".

If you want to use the Page.ClientScript registration methods for page script (written inline with markup), create a Literal control and put your script tag there. Then on the code-behind you can use Page.ClientScript.RegisterClientScriptBlock().

On the page:

<body>
    <form id="form1" runat="server">
    <asp:Literal runat="server" ID="ScriptLiteral" Visible="false">
    <script language="javascript" type="text/javascript">
        alert($);
    </script>
    </asp:Literal> 

Note that I'm using a hidden (Visible="false") Literal tag, and this tag is inside the <form runat="server"> tag. Which leads me to ..

Caveat #2: ASP.NET controls can only be declared inside <form runat="server">.

Alright, so then on the code-behind file (or server script), I add:

protected void Page_Load(object sender, EventArgs e)
{
    Page.ClientScript.RegisterClientScriptInclude(
        typeof(_YourPageClassName_), "jQuery", ResolveUrl("~/js/jquery-1.2.6.js"));
    Page.ClientScript.RegisterClientScriptBlock(
        typeof(_YourPageClassName_), "ScriptBlock", ScriptLiteral.Text, false);
} 

Unfortunately, ..

Caveat #3: Client script blocks that are registered on the page in server code lack Intellisense and design-time support for script editing.

To my knowledge there's no way around this, and believe me, I've looked. This is a design oversight on Microsoft's part; it should not have been hard to create a special tag like <asp:ClientScriptBlock runat="server">YOUR_SCRIPT_HERE();</asp:ClientScriptBlock> that registers the given script during the Page_Load lifecycle and offers a rich, syntax-highlighting, Intellisense-supporting code editor for the contents of that control. They did add a ScriptManager control, which is unfortunately overkill in some ways, but it is only available in the ASP.NET AJAX extensions, not in core ASP.NET Web Forms.

But since they didn't give us this functionality in ASP.NET Web Forms, if you want natural script editing (and let's face it, we all do), you can use unregistered <script> tags the old-fashioned way. Just put each script block either inside the <form runat="server"> element, wrapped in a Literal control and registered as demonstrated above, or else below the closing </form> tag of the <form runat="server"> element.

Tip: You can usually safely use plain HTML <script language="javascript" type="text/javascript">...</script> tags the old-fashioned way, without registering them, as long as you place them below your <form runat="server"> blocks and you are acutely aware of which dependency scripts are or are not also registered.
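One defensive habit when mixing registered and unregistered scripts: verify that a dependency actually loaded before your page script uses it. The following is an illustrative sketch (the helper name is mine, not part of any library):

```javascript
// Illustrative check that a dependency script (e.g. jQuery) actually
// loaded before page script tries to use it (helper name is hypothetical).
function dependencyStatus(name, value) {
    if (typeof value === "undefined")
        return name + " is missing -- check script order and registration";
    return name + " is loaded";
}

// In the page, after the includes should have run:
// alert(dependencyStatus("jQuery", window.jQuery));
```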

But scripts that are used as dependency libraries for your page scripts, such as jQuery, should be registered. Now then. We can simplify this... 

Tip: Use an .ascx control to shift the hassle of script registration to the markup rather than the code-behind file.

A client-side developer shouldn't have to keep jumping to the code-behind file to add client-side code. That just doesn't make a lot of workflow sense. So here's a thought: componentize jQuery as a server-side control so that you can declare it on the page and then call it.

Controls/jQuery.ascx (complete):

<%@ Control Language="C#" AutoEventWireup="true" CodeFile="jQuery.ascx.cs" Inherits="Controls_jQuery" %> 

(Nothing, basically.)

Controls/jQuery.ascx.cs (complete):

using System;
using System.Web.UI; 

public partial class Controls_jQuery : System.Web.UI.UserControl
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        if (Enabled) // honor the Enabled attribute declared on the tag
            AddJQuery();
    }

    private bool _Enabled = true;
    [PersistenceMode(PersistenceMode.Attribute)]
    public bool Enabled
    {
        get { return _Enabled; }
        set { _Enabled = value; }
    } 

    void AddJQuery()
    {
        string minified = Minified ? ".min" : "";
        string url = ResolveClientUrl(JSDirUrl
            + "jquery-" + _Version // lowercase, matching the distributed file name
            + minified
            + ".js");
        Page.ClientScript.RegisterClientScriptInclude(
            typeof(Controls_jQuery), "jQuery", url);
    } 

    private string _jsDir = null;
    public string JSDirUrl
    {
        get
        {
            if (_jsDir == null)
            {
                if (Application["JSDir"] != null)
                    _jsDir = (string)Application["JSDir"];
                else return "~/js/"; // default
            }
            return _jsDir;
        }
        set { _jsDir = value; }
    } 

    private string _Version = "1.2.6";
    [PersistenceMode(PersistenceMode.Attribute)]
    public string JQueryVersion
    {
        get { return _Version; }
        set { _Version = value; }
    } 

    private bool _Minified = false;
    [PersistenceMode(PersistenceMode.Attribute)]
    public bool Minified
    {
        get { return _Minified; }
        set { _Minified = value; }
    }
}


Now with this control created we can remove the Page_Load() code we talked about earlier, and just declare this control directly.

(Add to top of page, just below <%@ Page .. %>:)

<%@ Page Language="C#" AutoEventWireup="true"  CodeFile="Default.aspx.cs" Inherits="_Default" %>
<%@ Register src="~/Controls/jQuery.ascx" TagPrefix="local" TagName="jQuery" %> 

(Add just below <form runat="server">:)

<form id="form1" runat="server">
<local:jQuery runat="server"
    Enabled="true"
    JQueryVersion="1.2.6"
    Minified="false"
    JSDirUrl="~/js/" /> 

Note that none of the attributes listed above on local:jQuery (except for runat="server") are necessary, as they all have default values.

On a side note, if you were using Visual Studio 2008 you could use the script documentation features that let you add a reference to another script with ///<reference path="js/jquery-1.2.6.js" />.

There's something else I wanted to go over. In a previous discussion, I mentioned that I'd like to see multiple <form>'s on a page, each one being empowered in its own right with Javascript / AJAX functionality. I mentioned to use callbacks, not postbacks. In the absence of ASP.NET AJAX extensions, this makes <form runat="server"> far less relevant to the lifecycle of an AJAX-driven application.

To be clear,

  • Postbacks are the behavior of ASP.NET to perform a form post back to the same page from which the current view derived. It processes the view state information and updates the output with a new page view accordingly.
  • Callbacks are the behavior of an AJAX application to perform a GET or a POST to a callback URL. Ideally this should be an isolated URL that performs an action rather than requests a new view. The client side would then update the view itself, depending on the response from the action. The response can be plain text, HTML, JSON, XML, or anything else.

jQuery already has functionality that helps the web developer perform AJAX callbacks. Consider, for example, jQuery's serialize() function, which I apparently forgot about this week when I needed it (shame on me). Once I remembered it, I realized this weekend that I needed to go back and implement multiple <form>'s in what I've been working on to make that function work, just as I had been telling myself all along.

But as we know,

Caveat #4: You can only have one <form runat="server"> tag on a page.

And if you recall Caveat #2 above, that means that ASP.NET controls can only be put in one form on the page, period.

It's okay, though: we're not using ASP.NET controls for postbacks or for view state. We won't even use view state anymore, not in the ASP.NET Web Forms sense of the term. Session state, though? .. Maybe, assuming there is only one web server, or a shared session state service is in place, or the load balancer is configured to map each client to the same server on every request. If none of those hold, session-dependent features will likely break, which would mean you shouldn't abandon Web Forms-based programming yet. But no one in their right mind would let all three of these fail, so let's not worry about that.

So I submit this ..

Tip: You can have as many <form>'s on your page as you feel like, as long as they are not nested (you cannot nest <form>'s of any kind).

Caveat #5: You cannot have client-side <form>'s on your page if you are using Master pages, as Master pages impose a <form runat="server"> context for the entirety of the page.

With the power of jQuery to manipulate the DOM, this next tip becomes feasible:

Tip: Treat <form runat="server"> solely as a staging area, by wrapping it in <div style="display:none">..</div> and using jQuery to pull out what you need for each of your client-side <form>'s.

By "a staging area" I mean that the <form runat="server"> was necessary to include the client script controls for jQuery et al, but it will also be needed if we want to include any server-generated HTML that is easier to generate with .ascx controls or old-school <% %> blocks than on the client.

Let's create an example scenario. Consider the following page:

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="MyMultiForm.aspx.cs" Inherits="MyMultiForm" %> 

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
    
        <div id="This_Goes_To_Action_A">
            <asp:RadioButton ID="ActionARadio" runat="server" 
                GroupName="Action" Text="Action A" /><br />
            Name: <asp:TextBox runat="server" ID="Name"></asp:TextBox><br />
            Email: <asp:TextBox runat="server" ID="Email"></asp:TextBox>
        </div>
        
        <div id="This_Goes_To_Action_B">
            <asp:RadioButton ID="ActionBRadio" runat="server" 
                GroupName="Action" Text="Action B" /><br />
            Foo: <asp:TextBox runat="server" ID="Foo"></asp:TextBox><br />
            Bar: <asp:TextBox runat="server" ID="Bar"></asp:TextBox>
        </div>
        
        <asp:Button runat="server" Text="Submit" UseSubmitBehavior="true" />
    
    </div>
    </form>
</body>
</html> 

And just to illustrate this simple scenario with a rendered output ..

Now in a postback scenario, this would be handled on the server side by determining which radio button is checked, and then taking the appropriate action (Action A or Action B) on the appropriate fields (Action A's fields or Action B's fields).

Changing this instead to client-side behavior, the whole thing is garbage and should be rewritten from scratch.

Tip: Never use server-side controls except for staging data load or unless you are depending on the ASP.NET Web Forms life cycle in some other way.

In fact, if you are 100% certain that you will never stage data on data-bound server controls, you can eliminate the <form runat="server"> altogether and go back to using <script> tags for client scripts. Doing that, however, you'll have to keep your scripts in the <body>, and for that matter you might even consider just renaming your file with a .html extension rather than a .aspx extension, but of course at that point you're not using ASP.NET anymore, so don't. ;)

I'm going to leave <form runat="server"> in place because .ascx controls, even without postbacks and view state, are just too handy and I'll illustrate this with a drop-down list later.

I can easily replace the above scenario with two old-fashioned HTML forms:

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="MyMultiForm.aspx.cs" Inherits="MyMultiForm" %> 

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <%--Nothing in the form runat="server"--%>
    </form>
    <div>
        
        <div id="This_Goes_To_Action_A">
            <input type="radio" name="cbAction" checked="checked" 
                id="cbActionA" value="A" />
            <label for="cbActionA">Action A</label><br />
            <form id="ActionA" name="ActionA" action="ActionA">
                Name: <input type="text" name="Name" /><br />
                Email: <input type="text" name="Email" />
            </form>
        </div>
        <div id="This_Goes_To_Action_B">
            <input type="radio" name="cbAction" checked="checked" 
                id="cbActionB" value="B" />
            <label for="cbActionB">Action B</label><br />
            <form id="ActionB" name="ActionB" action="ActionB">
                Foo: <input type="text" name="Foo" /><br />
                Bar: <input type="text" name="Bar" />
            </form>
        </div>
        
        <button onclick="if (document.getElementById('cbActionA').checked)
                            alert('ActionA would submit.'); //document.ActionA.submit();
                         else if (document.getElementById('cbActionB').checked)
                            alert('ActionB would submit.'); //document.ActionB.submit();">Submit</button>
    
    </div>
</body>
</html> 

In some ways this got a lot cleaner, but in other ways it got a lot more complicated. First of all, I had to move the radio buttons outside of any forms, since radio buttons only group within a single form context.

For that matter, there's a design problem here: it would be better to put a submit button on each form than to use Javascript to determine which form to post based on a radio button. That way one doesn't have to manually check which radio button is checked; in fact, one could drop the radio buttons altogether, and could have from the beginning, even with ASP.NET postbacks, since both scenarios can accommodate two submit buttons, one for each form. But I put the radio buttons in to illustrate one small example of where things inevitably get complicated on a complex page with multiple forms and multiple callback behaviors.

In an AJAX-driven site, you should never (or rarely) use <input type="submit"> buttons, even if you have an onsubmit handler on your form. Instead, use plain <button>'s with onclick handlers, and control submission behavior with asynchronous XmlHttpRequest calls. If you must leave the page for another, either use user-clickable hyperlinks (ideally to a RESTful HTML view URL) or use window.location: window.location.reload() refreshes the page, and window.location.href = ".." redirects it. Refresh is useful if you really do want to stay on the same page but refresh your data. With no form postback, refreshing the page or clicking the Back and Forward buttons will not trigger the browser dialogue asking whether to resubmit the form data, which is NEVER an appropriate dialogue in an AJAX-driven site.

Another issue is that we are not taking advantage of jQuery at all and are using document.getElementById() instead.

Before we continue:

Tip: If at this point in your career path you feel more confident in ASP.NET Web Forms than in "advanced" HTML and DOM scripting, drop what you're doing and go become a master and guru of that area of web technology now.

ASP.NET Web Forms is harder to learn than HTML and DOM scripting, but I've found that ASP.NET and advanced HTML DOM can be, and often are, learned in isolation, so many competent ASP.NET developers know very little about "advanced" HTML and DOM scripting outside of the ASP.NET Web Forms methodology. But if you're trying to learn how to switch from postback-based coding to callback-based coding, we simply cannot continue until you have mastered HTML and DOM scripting; there are some great books on the subject worth reading.


Since this is also about jQuery, you need to have at least a strong working knowledge of jQuery before we continue.

The key problem with the above code, though, assuming the commented-out bits in the button's onclick event handler were used, is that the forms are still configured to post to the server with a full-page navigation, not AJAXy callback-style. What do we do?

First, bring back jQuery. We'll use the control we made earlier. (If you're using master pages, put this on the master page and forget about it so it's always there.)

..
<%@ Register src="~/Controls/jQuery.ascx" TagPrefix="local" TagName="jQuery" %>
..
<form id="form1" runat="server">
    <local:jQuery runat="server" />
</form> 

Next, to clean up, replace all document.getElementById(..) calls with $("#..")[0]. This is jQuery's easier-to-read-and-write way of getting a DOM element by ID. I know it looks odd at first, but once you know jQuery and are used to it, $("#..")[0] is a very natural-looking syntax.

<button onclick="if ($('#cbActionA')[0].checked)
                    alert('ActionA would submit.'); //$('#ActionA')[0].submit();
                 else if ($('#cbActionB')[0].checked)
                    alert('ActionB would submit.'); //$('#ActionB')[0].submit();">Submit</button> 

Now we need to take a look at that submit() code and replace it.

One of the main reasons we broke off <form runat="server"> and created two isolated forms is so that we can invoke jQuery's serialize() function, which creates a string consisting of essentially the same serialization that would have been POSTed to the server had the form's default submit behavior executed. serialize() requires a dedicated form to process the conversion, and the resulting string is essentially what normally goes in the HTTP request body of a POST.

Note: jQuery documentation mentions, "In order to work properly, serialize() requires that form fields have a name attribute. Having only an id will not work." But you must also give your <form> an id attribute if you intend to use $("#formid").
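To see roughly what serialize() produces, here's a plain-Javascript sketch of the same encoding. This is an illustration only, not jQuery's implementation (which walks the form's fields for you):

```javascript
// Rough sketch of the string serialize() builds from a form's fields:
// URL-encode each name/value pair and join the pairs with "&".
function encodePairs(pairs) {
    var parts = [];
    for (var i = 0; i < pairs.length; i++) {
        parts.push(encodeURIComponent(pairs[i].name)
            + "=" + encodeURIComponent(pairs[i].value));
    }
    return parts.join("&");
}

// The ActionA form's Name and Email fields would serialize roughly to:
var body = encodePairs([
    { name: "Name", value: "Jon Davis" },
    { name: "Email", value: "jon@example.com" }
]);
// body is "Name=Jon%20Davis&Email=jon%40example.com"
```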

So now instead of invoking the appropriate form's submit() method, we should invoke a custom function that takes the form, serializes it, and POSTs it to the server, asynchronously. That was our objective in the first place, right?

So we'll add the custom function.

    <script language="javascript" type="text/javascript">
        function postFormAsync(form, fn, returnType) {
            var formFields = $(form).serialize();
            
            // set up a default POST completion routine
            if (!fn) fn = function(response) {
                alert(response);
            }; 

            $.post(
                $(form).attr("action"), // action attribute (url)
                formFields,             // data
                fn,                     // callback
                returnType              // optional
                );
        }
    </script> 

Note the fn argument, which is optional (it defaults to alerting the response) and which I'll not use at this time. It's the callback function: basically, what to do once the POST completes. In a real-world scenario, you'd probably want to pass a function that redirects the user with window.location.href or otherwise updates the contents of the page using DOM scripting. Note also the returnType argument; refer to jQuery's documentation for that, it's pretty straightforward.
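For instance, a caller could supply its own completion routine. The factory below is hypothetical (the name and the status-element idea are mine), but it shows the shape the fn argument takes:

```javascript
// Hypothetical factory producing a completion callback for postFormAsync.
// On the page, the returned function would update a status element;
// here it returns the message so the behavior is visible.
function makeCompletionHandler(statusSelector) {
    return function (response) {
        // On the page this might be: $(statusSelector).html(response);
        return statusSelector + " updated with: " + response;
    };
}

// Usage: postFormAsync($('#ActionA')[0], makeCompletionHandler('#statusA'));
```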

And finally we'll change the button code to invoke it accordingly.

<button onclick="if ($('#cbActionA')[0].checked)
                    postFormAsync($('#ActionA')[0]);
                 else if ($('#cbActionB')[0].checked)
                    postFormAsync($('#ActionB')[0]);">Submit</button> 

This works, but it assumes that a callback URL handler is waiting for you at the action="" attribute of the form. For my own tests of this sample, I changed the action="" attributes on my <form> tags to "ActionA.aspx" and "ActionB.aspx", two new .aspx files that simply contained "Action A!!" and "Action B!!" as markup. While my .aspx files would also need to check the posted form fields, the script otherwise worked fine and proved the point.

Alright, at this point some folks might still be squirming with irritation and confusion about the <form runat="server">. Now that we have jQuery performing AJAX callbacks for us, I have yet to prove out any utility in having a <form runat="server"> in the first place, or what "staging" means in the context of the tip I stated earlier. Well, the automated insertion of jQuery and our page script at appropriate points within the page is in fact one example of the "staging" I'm referring to. But another kind of staging is data binding for the initial view.

Let's consider the scenario where both of two forms on a single page have a long list of data-driven values.

Page:

...
<asp:DropDownList runat="server" ID="DataList1" />
<asp:DropDownList runat="server" ID="DataList2" />
... 

Code-behind / server script:

protected void Page_Load(object sender, EventArgs e)
{
    DataList1.DataSource = GetSomeData();
    DataList1.DataBind();

    DataList2.DataSource = GetSomeOtherData();
    DataList2.DataBind();
} 

Now let's assume that DataList1 will be used by the form Action A, and DataList2 will be used by the form Action B. Each will be "used by" their respective forms only in the sense that their <option> tags will be populated by the server at runtime.

Since you can only put these ASP.NET controls in a <form runat="server"> form, and you can only have one <form runat="server"> on the page, you cannot therefore simply put an <asp:DropDownList ... /> control directly into each of your forms. You'll have to come up with another way.

One-way data binding technique #1: Move the element, or contain the element and move the element's container.

You could just move the element straight over from the <form runat="server"> to your preferred form as soon as the page loads. To do this cleanly, you'll have to wrap the ASP.NET control in a container <div> or <span> tag whose ID you can predict.

Basic example:

$("#myFormPlaceholder").append($("#myControlContainer")); 

Detailed example:

...
<div style="display: none" id="ServerForm">
    <%-- Server form is only used for staging, as shown--%>
    <form id="form1" runat="server">
        <local:jQuery runat="server" />
        <span id="DataList1_Container">
            <asp:DropDownList runat="server" ID="DataList1">
            </asp:DropDownList>
        </span>
        <span id="DataList2_Container">
            <asp:DropDownList runat="server" ID="DataList2">
            </asp:DropDownList>
        </span>
    </form>
</div>
...
<script language="javascript" type="text/javascript">
...
$().ready(function() {
    $("#DataList1_PlaceHolder").append($("#DataList1_Container"));
    $("#DataList2_PlaceHolder").append($("#DataList2_Container"));
});
</script>
<div>
    <div id="This_Goes_To_Action_A">
        ...
        <form id="ActionA" name="ActionA" action="callback/ActionA.aspx">
        ...
        DropDown1: <span id="DataList1_PlaceHolder"></span>
        </form>
    </div>
    <div id="This_Goes_To_Action_B">
        ...
        <form id="ActionB" name="ActionB" action="callback/ActionB.aspx">
        ...
        DropDown2: <span id="DataList2_PlaceHolder"></span>
        </form>
    </div>
</div> 

An alternative to referencing an ASP.NET control in its DOM context through a container element is to expose its ClientID property to script and move the server control directly. If you're using simple client <script> tags without registering them, you can use <%= control.ClientID %> syntax.

Page: 

<script language="javascript" type="text/javascript">
...
$().ready(function() {
    var DataList1 = $("#<%= DataList1.ClientID %>")[0];
    var DataList2 = $("#<%= DataList2.ClientID %>")[0];
    $("#DataList1_PlaceHolder").append($(DataList1));
    $("#DataList2_PlaceHolder").append($(DataList2));
});
</script> 

If you are using a Literal and Page.ClientScript.RegisterClientScriptBlock(), you won't be able to use <%= control.ClientID %> syntax, but you can instead use a pseudo-tag syntax like "{control.ClientID}" and then, when calling RegisterClientScriptBlock(), perform a Replace() against that pseudo-tag.

Page: 

<asp:Literal runat="server" Visible="false" ID="ScriptLiteral">
<script language="javascript" type="text/javascript">
    ...
    $().ready(function() {
        var DataList1 = $("#{DataList1.ClientID}")[0];
        var DataList2 = $("#{DataList2.ClientID}")[0];
        $("#DataList1_PlaceHolder").append($(DataList1));
        $("#DataList2_PlaceHolder").append($(DataList2));
    });
</script>
</asp:Literal> 

Code-behind / server script:

protected void Page_Load(object sender, EventArgs e)
{
    ...
    Page.ClientScript.RegisterClientScriptBlock(
        typeof(MyMultiForm), "pageScript", 
        ScriptLiteral.Text
            .Replace("{DataList1.ClientID}", DataList1.ClientID)
            .Replace("{DataList2.ClientID}", DataList2.ClientID));
} 

For the sake of brevity (and as a tentative decision for my own usage), for the rest of this discussion I will use the second of the three approaches: old-fashioned <script> tags with <%= control.ClientID %> syntax to identify server control DOM elements, moving each element directly rather than wrapping it in a container.

One-way data binding technique #2: Clone the element and/or copy its contents.

You can copy the contents of the server control's data output to the place on the page where you're actively using the data. This can be useful if, for example, each of two forms has a field that uses the same data.

Page:

<script language="javascript" type="text/javascript">
function copyOptions(src, dest) {
    for (var o = 0; o < src.options.length; o++) {
        var opt = document.createElement("option");
        opt.value = src.options[o].value;
        opt.text = src.options[o].text;
        try {
            dest.add(opt, null); // standards compliant; doesn't work in IE
        }
        catch (ex) {
            dest.add(opt); // IE only
        }
    }
} 

$().ready(function() {
    var DataList1 = $("#<%= DataList1.ClientID %>")[0];
    copyOptions(DataList1, $("#ActionA_List")[0]);
    copyOptions(DataList1, $("#ActionB_List")[0]); // both use same DataList1
});
</script> 

... 

<form id="ActionA" ...>
    ...
    DropDown1: <select id="ActionA_List"></select>
</form>
<form id="ActionB" ...>
    ...
    DropDown1: <select id="ActionB_List"></select>
</form>


This introduces a sort of dynamic data binding technique whereby the form of the data being output by the server controls actually starts to blur in importance. What if, for example, the server form stopped outputting HTML and instead began outputting JSON? The revised syntax would not be much different from the above, but the source data would come not from DOM elements but from data structures. That would be much more manageable from the perspective of separation of concerns and testability.

But before I get into that, what if things got even more tightly coupled instead? 

One-way data binding technique #3: Mangle the markup directly.

As others have noted, inline server markup was pooh-pooh'd when ASP.NET came out and introduced the code-behind model. But when migrating away from Web Forms, going back to the old-fashioned inline server tags and logic is like a breath of fresh air: it allows much to be done with little effort.

Here you can see how quickly and easily one can populate a drop-down list using no code-behind conventions and using the templating engine that ASP.NET already inherently offers.

List<string>:

<select>
    <% MyList.ForEach(delegate (string s) {
            %><option><%=HttpUtility.HtmlEncode(s)%></option><%
        }); %>
</select>  

Dictionary<string, string>:

<select>
    <%  foreach (
           System.Collections.Generic.KeyValuePair<string, string> item
           in MyDictionary)
        {
            %><option value="<%= HttpUtility.HtmlEncode(item.Value) 
            %>"><%=HttpUtility.HtmlEncode(item.Key) %></option><%
        } %>
</select> 

For simple conversions of lists and dictionaries to HTML, this looks quite lightweight. Even mocking this up I am impressed. Unfortunately, in the real world data binding often tends to get more complex.

One-way data binding technique #4: Bind to raw text, JSON/Javascript, or embedded XML.

In technique #2 above (clone the element and/or copy its contents), data was bound from other HTML elements. To get the original HTML elements, the HTML had to be generated by the server. Technically, data-binding to HTML is a form of serialization. But one could also serialize the data model as data and then use script to build the destination HTML and/or DOM elements from the script data rather than from original HTML/DOM.

You could output data as raw text, such as name/value pair collections formatted like a query string. Working with raw text requires manual parsing. That can be fine for really simple comma-delimited lists (see Javascript's String.split()), but as soon as you introduce even slightly more complex data structures, such as trees, you end up needing to look at alternatives.
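To illustrate, here's a minimal sketch of that manual parsing on the client (the function name is my own invention, not from any library):

```javascript
// Split a simple comma-delimited list:
var colors = "red,green,blue".split(",");

// Parse a query-string-style name/value collection ("a=1&b=2") into a
// plain Javascript object; values are URL-decoded.
function parseNameValuePairs(text) {
    var result = {};
    var pairs = text.split("&");
    for (var i = 0; i < pairs.length; i++) {
        var parts = pairs[i].split("=");
        result[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || "");
    }
    return result;
}

var settings = parseNameValuePairs("theme=dark&pageSize=25");
```

This works fine until the data has any nesting; at that point hand-rolled parsing becomes a liability, which is where XML or JSON comes in.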

The traditional data structure for working with anything on the Internet is XML, and for good reason; XML is extremely versatile as a data description language. Unfortunately, XML in a browser client is extremely difficult to code for because each browser has its own implementation of the XML reading and manipulation APIs, with differences far greater than the HTML and CSS compliance differences between browsers.

If you use JSON, you're working with Javascript literals. If you have a JSON library installed (I like the JsonFx Serializer because it works with ASP.NET 2.0 / C# 2.0), you can take any object that would normally be serialized and JSON-serialize it as a string on the fly. Once this string is injected into the page's Javascript, you can access the data as live Javascript objects rather than as parsed XML trees or split string arrays.

Working directly with data structures rather than generated HTML is much more flexible when you're working with a solution that is already Javascript-oriented rather than HTML-oriented. If most of the view logic is driven by Javascript, indeed it is often very nice for the script runtime to be as data-aware as possible, which is why I prefer JSON because the data structures are in Javascript's "native tongue", no translation necessary.
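As a sketch of what that looks like (the data shape and function name here are invented for illustration, not part of JsonFx), the server might inject a JSON array and the client might build the &lt;option&gt; markup from it; building the markup as a string keeps the logic testable apart from the DOM:

```javascript
// Hypothetical JSON-serialized data, as the server might inject it:
var stateData = [
    { value: "AZ", text: "Arizona" },
    { value: "CA", text: "California" }
];

// Build <option> markup from the data rather than from existing DOM elements.
// (Real code should HTML-encode the values and text.)
function buildOptions(items) {
    var html = "";
    for (var i = 0; i < items.length; i++) {
        html += '<option value="' + items[i].value + '">' +
                items[i].text + '</option>';
    }
    return html;
}

var optionsHtml = buildOptions(stateData);
// In the page you would then inject it, e.g. with jQuery:
// $("#ActionA_List").html(optionsHtml);
```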

Once you've crossed the line of moving your data away from HTML generation and into script, a whole new door opens: the client can receive pre-generated HTML as rendering templates only, then take the data and use those templates to present it to the user. Compared with doing the same work on the server, this inevitably makes the client experience much more fluid. But at this point you can start delving into real AJAX...

One-way data binding technique #5: Scrap server controls altogether and use AJAX callbacks only.

Consider the scenario of a page that starts out as a blank canvas. It has a number of rendering templates already loaded, but there is absolutely no data on the initial page that is sent down. As soon as the page is loaded, however (think jQuery's "$(document).ready(function() { ... });"), you could have the page load the data it needs to function. This data could derive from a web service URL that is isolated from the page entirely--the same app, that is, but a different relative URL.

In an ASP.NET 2.0 implementation, this can be handled easily with jQuery, .ashx files, and something like the JsonFx JSON Serializer.

From an AJAX purist perspective, AJAX-driven data binding is by far the cleanest approach to client orientation. While it does result in the most "chatty" HTTP interaction, it can also result in the most fluid user experience and the most manageable web development paradigm, because now you've literally isolated the data tier in its entirety.

When you work with data in script and synchronize it using AJAX and nothing but straight Javascript standards, the door flies wide open to convert one-way data binding into two-way data binding. Posting back to the server is a snap; all you need to do is update your script data objects from the HTML DOM selections and then push the data object back over the wire, in exactly the same way the data was retrieved but in reverse.
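A minimal sketch of that reversal (the names and endpoint are hypothetical; the actual post is shown only as a comment since it depends on your server):

```javascript
// One-way in: the data arrived as a live Javascript object (via JSON).
var profile = { firstName: "Joe", lastName: "Smith", state: "AZ" };

// Two-way out: copy the user's DOM selections back into the data object...
function updateFromSelections(data, selections) {
    for (var name in selections) {
        if (selections.hasOwnProperty(name)) {
            data[name] = selections[name];
        }
    }
    return data;
}

updateFromSelections(profile, { state: "CA" });

// ...then push the object back over the wire, the reverse of how it came in:
// $.post("Profile.ashx", { json: JSON.stringify(profile) });
```

(In 2008-era browsers, JSON.stringify comes from a library such as json2.js rather than from the browser itself.)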

In most ways, client-side UI logic and AJAX are the panacea for versatile web UIs. The problem is that there is little consistent guidance in the industry, especially for the .NET crowd. There are a lot of client-oriented architectures; few of them are suited for the ASP.NET environment, and the ones that are, or that are neutral, lack server-side orientation or are not properly endorsed by the major players. This should not be the case, but it is. As a result, combined AJAX client/server frameworks like ASP.NET AJAX, RoR+prototype+scriptaculous, GWT, Laszlo, and others all look like feasible considerations, but personally I think they all stink of excess, learning curve, and code and/or runtime bloat in solution implementations.


Keys To Web 3.0 Design and Development When Using ASP.NET

by Jon Davis 9. October 2008 05:45

You can skip the following boring story as it's only a prelude to the meat of this post.

As I've been sitting at my job lately trying to pull off my web development ninja skillz, I feel like my hands are tied behind my back because I'm there temporarily as a consultant to add features, not to refactor. The current task at hand involves adding a couple of additional properties to a key user component in a rich web application. This requires a couple of extra database columns and a bit of HTML interaction to collect the new settings. All in all, about 15 minutes, right? Slap the columns into the database, update the SQL SELECT query, throw on a couple of ASP.NET controls, add some data binding, and you're done, right? Surely not more than an hour, right?

Try three hours, just to add the columns to the database! The HTML is driven by a data "business object" that isn't a business object at all, just a data layer that has method stubs for invoking stored procedures and returns only DataTables. There are four types of "objects" based on the table being modified, and each type has its own stored procedure that ultimately proxies out to the base type's stored procedure, so that means at least five stored procedures for each CRUD operation affected by the addition. Overall, about 10 database objects were touched and as many C# data layer objects as well. Add to that a proprietary XML file that is used to map these data objects' DataTable columns, both in (parameters) and out (fields).

That's just the data. Then on the ASP.NET side, to manage event properties there's a control that's inheriting another control that is contained by another control that is contained by two other controls before it finally shows up on the page. Changes to the properties are a mix of hard-wired bindings to the lowest base control (properties) for some of the user's settings, and for most of the rest of the user's settings on the same page, CLR events (event args) are raised by the controls and are captured by the page that contains it all. There are at least five different events, one for each "section" of properties. To top it off, in my shame, I added both another "SaveXXX" event, plus I added another way of passing the data--I use a series of FindControl(..) invocation chains to get to the buried control and fetch the setting I wanted to add to the database and/or translate back out to the view. (I would have done better than to add more kludge, but I couldn't without being enticed to refactor, which I couldn't do, it's a temporary contract and the boss insisted that I not.)

To top it all off, just the simple CRUD stored procedures alone are slower than an eye blink, and seemingly showstopping in code. It takes about five seconds to handle each postback on this page, and I'm running locally (with a networked SQL Server instance).

The guys who architected all this are long gone. This wasn't the first time I've been baffled by the output of an architect who tries too hard to do the architectural deed while forgetting that his job is not only to be declarative on all layers but also to balance that with performance and with keeping the developers' lives uncomplicated. In order for the team to be agile, the code must be easily adaptable.

Plus the machine I was given is, just like everyone else's, a cheap Dell with 2GB RAM and a 17" LCD monitor. (At my last job, which I quit, I had a 30-inch monitor and 4GB RAM which I replaced without permission and on my own whim with 8GB.) I frequently get OutOfMemoryExceptions from Visual Studio when trying to simply compile the code.

There are a number of reasons I can pinpoint to describe exactly why this web application has been so horrible to work with. Among them,

  • The architecture violates the KISS principle. The extremities of the data layer prove to be confounding, and burying controls inside controls (compositing) and then forking instances of them is a severe abuse of ASP.NET "flexibility".
  • OOP principles were completely ignored. Not a single data layer inherits from another. There is no business object among the "Business" objects' namespace, only data invocation stubs that wrap stored procedure execution with a transactional context, and DataTables for output. No POCO objects to represent any of the data or to reuse inherited code.
  • Tables, not stored procedures, should be used in basic CRUD operations. One should use stored procedures only in complex operations where multiple two-way queries must be accomplished to get a job done. Stored procedures are good for such operations, bad for basic data I/O and model management.
  • Way too much emphasis on relying on the Web Forms "featureset" and lifecycle (event raising, viewstate hacking, control compositing, etc.) to accomplish functionality, and way too little understanding and utilization of the basic birds and butterflies (HTML and script).
  • Way too little attention to developer productivity by failure to move the development database to the local switch, have adequate RAM, and provide adequate screen real estate to manage hundreds of database objects and hundreds of thousands of lines of code.
  • Admission by the development manager of the sadly ignorant and costly attitude that "managers don't care about cleaning things up and refactoring, they just want to get things done and be done with it"--I say "ignorant and costly" because my billable hours were more than quadrupled versus having clean, editable code to begin with.
  • New features are not testable in isolation -- in fact, they aren't even compilable in isolation. I can compile and do lightweight testing of the data layer without more than a few heartbeats, but it takes two minutes to compile the web site just to see where my syntax or other compiler-detected errors are in my code additions (and I haven't been sleeping well lately so I'm hitting the Rebuild button and monitoring the Errors window an awful lot). 

Even as I study (ever so slowly) for MCPD certification for my own reasons while I'm at home (spare me the biased anti-Microsoft flames on that, I don't care) I'm finding that Microsoft end developers (Morts) and Microsofties (Redmondites) alike are struggling with the bulk of their own technology and are heaping up upon themselves the knowledge of their own infrastructure before fully appreciating the beauty and the simplicity of the pure basics. Fortunately, Microsoft has had enough, and they've been long and hard at the drawing board to reinvent ASP.NET with ASP.NET MVC. But my interests are not entirely, or not necessarily, MVC-related.

All I really want is for this big fat pillow to be taken off of my face, and all these multiple layers of coats and sweatshirts and mittens and ski pants and snow boots to be taken off me, so I can stomp around wearing just enough of what I need to be decent. I need to breathe, I need to move around, and I need to be able to do some ninja kung fu.

These experiences I've had with ASP.NET solutions often make me sit around brainstorming how I'd build the same solutions differently. It's always easy to be everyone's skeptic, and it requires humility to acknowledge that just because you didn't write something or it isn't in your style or flavor doesn't mean it's bad or poorly produced. Sometimes, however, it is. And most solutions built with Web Forms, actually, are.

My frustration isn't just with Web Forms. It's with corporations that build upon Internet Explorer rather than HTML+Javascript. It's with most ASP.NET web applications adopting a look-and-feel that seems to grow in a box controlled by Redmondites, with few artistic deviators rocking the boat. It's with server-driven view management rather than smart clients in script and markup. It's with nearly all development frameworks that cater to the ASP.NET crowd being built for IIS (the server) and not for the browser (the client).

I intend to do my part, although intentions are easy and actions can be hard. I've helped design an elaborate client-side MVC framework before, with great pride. I'm thinking about doing it again, implementing it myself this time (I didn't have the luxury of a real-world implementation [i.e. a site] last time; I only helped design it and wrote some of the core code), and open-sourcing it for the ASP.NET crowd. I'm also thinking about building a certain kind of ASP.NET solution I've frequently needed to work with (CRM? CMS? Social? something else? *grin* I won't say just yet) that takes advantage of certain principles.

What principles? I need to establish these before I even begin. These have already worked their way into my head and my attitude and are already an influence in every choice I make in web architecture, and I think they're worth sharing.

1. Think dynamic HTML, not dynamically generated HTML. Think of HTML like food; do you want your fajitas sizzling when they arrive so you can use a fork and knife while you enjoy them fresh on your plate, or do you prefer your food preprocessed and shoved into your mouth like a dripping wet ball of finger-food sludge? As much as I love C#, and acknowledge the values of Java, PHP, Ruby on Rails, et al, the proven king and queen of the web right now, for most of the web's past, and for the indefinite future are the HTML DOM and Javascript. This has never been truer than now with jQuery, MooTools, and other (I'd rather not list them all) significant scripting libraries that have flooded the web development industry with client-side empowerment. Now with Microsoft adopting jQuery as a core asset for ASP.NET's future, there's no longer any excuse. Learn to develop the view for the client, not for the server.

Why? Because despite the fact that client-side debugging tools are less evolved than on the server (no edit-and-continue in VS, for example, and FireBug is itself buggy), the overhead of managing presentation logic in a (server) context that doesn't relate to the user's runtime is just too much to deal with sometimes. Server code often takes time to recompile, whereas scripts don't typically require compilation at all. While in theory there is plenty of control on the server to debug what's needed while you have control of it in your own predictable environment, in practice there are just too many stop-edit-retry cycles going on in server-oriented view management.

And here's why that is. The big reason to move the view to the client is that developers are writing WAY too much view, business, and data mangling logic in the same scope and context. Client-driven view management nearly forces the developer to isolate view logic from data. In ASP.NET Web Forms, your 3 tiers are the database, data+view mangling on the server, and finally whatever poor and unlucky little animal (browser) has to suffer with the resulting HTML. ASP.NET MVC changes that to essentially five tiers: the database, the models, the controller, the server-side view template, and finally whatever poor and unlucky little animal has to suffer with the resulting HTML. (Okay, Microsoft might be changing that by adopting jQuery and promising a client solution; we'll see.)

Most importantly, client-driven views make for a much richer, more interactive UIX (User Interface/eXperience); you can, for example reveal/hide or enable/disable a set of sub-questions depending on if the user checks a checkbox, with instant gratification. The ASP.NET Web Forms model would have it automatically perform a form post to refresh the page with the area enabled/disabled/revealed/hidden depending on the checked state. The difference is profound--a millisecond or two versus an entire second or two.

2. Abandon ASP.NET Web Forms. RoR implements a good model, try gleaning from that. ASP.NET MVC might be the way of the future. But frankly, most of the insanely popular web solutions on the Internet are PHP-driven these days, and I'm betting that's because PHP is on a similar coding model as ASP classic. No MVC stubs. No code-behinds. All that stuff can be tailored into a site as a matter of discipline (one of the reasons why PHP added OOP), but you're not forced into a one-size-fits-all paradigm, you just write your HTML templates and go.

Why? Web Forms is a bear. Its only two advantages are the ability to drag-and-drop functionality onto a page and watch it go, and premier vendor (Microsoft / Visual Studio / MSDN) support. But it's difficult to optimize, difficult to templatize, difficult to abstract away from business logic layers (at least in that it requires intentional discipline), and puts way too much emphasis on the lifecycle of the page hit and postback. Look around at the ASP.NET Web Forms solutions out there. Web Forms is crusty like Visual Basic is crusty. It was created for, and is mostly used by, corporate grunts who build B2B (business-to-business) or internal apps. The rest of the web sites that use ASP.NET Web Forms suffer greatly from the painful code bloat of the Web Forms coding model and the horrible end-user costs of page bloat and round-trip navigation.

Kudos to Guthrie, et al, who developed Web Forms, it is a neat technology, but it is absolutely NOT a one-size-fits-all platform any more than my winter coat from Minnesota is. So congratulations to Microsoft for picking up the ball and working on ASP.NET MVC.

3. Use callbacks, not postbacks. Sometimes a single little control, like a textbox that behaves like an auto-suggest combobox, just needs a dedicated URL to perform an AJAX query against. But also, in ASP.NET space, I envision the return of multiple <form>'s, with DHTML-based page MVC controllers powering them all, driving them through AJAX/XmlHttpRequest.

Why? Clients can be smart now. They should do the view processing, not the server. The browser standard has finally arrived to such a place that most people have browsers capable of true DOM/DHTML and Javascript with JSON and XmlHttpRequest support.

Clearing and redrawing the screen is as bad as 1980s BBS ANSI screen redraws. It's obsolete. We don't need to write apps that way. Postbacks are cheap; don't be cheap. Be agile; use patterns, practices, and techniques that save development time and energy while avoiding the loss of a fluid user experience. <form action="someplace" /> should *always* have an onsubmit handler that returns false but runs an AJAX-driven post. The page should *optionally* redirect, but more likely only the area of the form or a region of the page (a containing DIV perhaps) should be replaced with the results of the post. Retain your header and sidebar in the user experience, and don't even let the content area go white for a split second. Buffer the HTML and display it when ready.
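The contract of such a handler can be sketched in isolation (names invented; the AJAX post itself is represented by a callback):

```javascript
// Wrap an AJAX-driven post in an onsubmit handler: fire the post,
// then return false so the browser's own form submission is cancelled.
function makeAjaxSubmitHandler(postFn) {
    return function () {
        postFn();
        return false;
    };
}

// Simulate <form onsubmit="return handler()"> firing:
var posted = false;
var handler = makeAjaxSubmitHandler(function () { posted = true; });
var result = handler();
```

In the page, postFn would be the XmlHttpRequest (or jQuery $.post) call whose response replaces just the form's containing DIV rather than the whole page.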

ASP.NET AJAX has region refreshes already, but still supports only <form runat="server" /> (limit 1), and the code-behind model of ASP.NET AJAX remains the same. Without discipline of changing from postback to callback behavior, it is difficult to isolate page posts from componentized view behavior. Further, <form runat="server" /> should be considered deprecated and obsolete. Theoretically, if you *must* have ViewState information you can drive it all with Javascript and client-side controllers assigned to each form.

ASP.NET MVC can manage callbacks uniformly by defining a REST URL suffix, prefix, or querystring, and then assigning a JSON handler view to that URL. For example, ~/employee/profile/jsmith?view=json might return the Javascript object that represents employee Joe Smith's profile. You can then use Javascript to pump HTML generated at the client into the view based on the results of the AJAX request.
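As a sketch (the URL comes from the example above; the field names and rendering function are hypothetical), the client side of that callback might look like this, with the HTML generation kept as a plain testable function:

```javascript
// Render an employee profile object, as a JSON callback might return it,
// into HTML for injection into the view.
function renderProfile(employee) {
    return "<div class='profile'><h2>" + employee.name + "</h2>" +
           "<p>" + employee.title + "</p></div>";
}

var profileHtml = renderProfile({ name: "Joe Smith", title: "Engineer" });

// In the page, with jQuery:
// $.getJSON("/employee/profile/jsmith?view=json", function (emp) {
//     $("#profileView").html(renderProfile(emp));
// });
```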

4. By default, allow users to log in without visiting a login page. A slight tangent (or so it would seem), this is a UI design constraint, something that has been a pet peeve of mine ever since I realized that it's totally unnecessary to have a login page. If you don't want to put ugly Username/Password fields on the header or sidebar, use AJAX.

Why? Because if a user visits your site and sees something interesting and clicks on a link, but membership is required, the entire user experience is interrupted by the disruption of a login screen. Instead, fade out to 60%, show a DHTML pop-up login, then fade back in and continue forward. The user never leaves the page before seeing the link or functionality being accessed.

Imagine if Microsoft Windows' UAC, OS X's keyring, or GNOME's sudo auth did a total clear-screen and ignored your action whenever it needed an Administrator password. Thankfully it doesn't work that way; the flow is paused with a small dialogue box, not flat out interrupted.

5. Abandon the Internet Explorer "standard". This goes out to the corporate folks who target IE. I am not saying this as an anti-IE bigot. In fact, I'm saying this in Internet Explorer's favor. Internet Explorer 8 (currently not yet released, still in beta) introduces better web standards support than previous versions of Internet Explorer, and it's not nearly as far behind the trail of Firefox and WebKit (Safari, Chrome) as Internet Explorer 7 is. With this reality, web developers can finally and safely build W3C-compliant web applications without worrying too much about which browser vendor the user is using, and instead ask the user to get the latest version.

Why? Supporting multiple different browsers typically means writing more than one version of a view. That means developer productivity is lost. That means features get stripped out due to time constraints. That means your web site is crappier. That means users will be upset because they're not getting as much of what they want. That means fewer users will come. And that means less money. So take on the "write once, run anywhere" mantra (which was Java's slogan back in the mid-90s) by writing W3C-compliant code, and leave behind only those users who refuse to update their favorite browsers; you'll get a lot more done while reaching a broader market, if not now then very soon, perhaps half a year after IE 8 is released. Use Javascript libraries like jQuery to handle most of the browser differences that are left over, while at the same time being empowered to add a lot of UI functionality without postbacks. (Did I mention that postbacks are evil?)

6. When hiring, favor HTML+CSS+Javascript gurus who have talent and an eye for good UIX (User Interface/eXperience) over ASP.NET+database gurus. Yeah! I just said that!

Why? Because the web runs on the web! Surprisingly, most employers don't have any idea and have this all upside down. They favor database gurus as gods and look down upon UIX developers as children. But the fact is I've seen more ASP.NET+SQL guys who halfway know that stuff and know little of HTML+Javascript than I have seen AJAX pros, and honestly pretty much every AJAX pro is bright enough and smart enough to get down and dirty with BLL and SQL when the time comes. Personally, I can see why HTML+CSS+Javascript roles are paid less (sometimes a lot less) than the server-oriented developers--any script kiddie can learn HTML!--but when it comes to professional web development they are ignored WAY too much because of only that. The web's top sites require extremely brilliant front-end expertise, including Facebook, Hotmail, Gmail, Flickr, YouTube, MSNBC--even Amazon.com which most prominently features server-generated content but yet also reveals a significant amount of client-side expertise.

I've blogged it before and I'll mention it again: the one, first, and most recent time I ever had to personally fire a co-worker (due to my boss being out of town, my having authority, and my boss requesting it of me over the phone) was when I was working with an "imported" contractor who had a master's degree and full Microsoft certification, but could not copy two simple hyperlinks with revised URLs in less than 5-10 minutes while I watched. The whole office was in a gossiping frenzy--"What? Couldn't create a hyperlink? Who doesn't know HTML?! How could anyone not know HTML?!"--but I realized that we as technologists have taken the core fundamentals for granted to such an extent that we've forgotten how important it is to value them in our hiring processes.

7. ADO.NET direct SQL code or ORM. Pick one. Don't just use data layers. Learn OOP fundamentals. The ActiveRecord pattern is nice. Alternatively, if it's a really lightweight web solution, just go back to writing plain-Jane SQL with ADO.NET. If you're using C# 3.0, which of course you are in the context of this blog entry, then use LINQ-to-SQL or LINQ-to-Entities. On the ORM side, however, I'm losing favor with some of them because they often cater to a particular crowd. I'm slow to say "enterprise" because, frankly, too many people assume the word "enterprise" for their solutions when they are anything but. Even web sites running at tens of thousands of hits a day and generating hundreds of thousands of dollars of revenue every month don't necessarily mean "enterprise". The term "enterprise" is more of a people management inference than a stability or quality effort. It's about getting many people on your team using the same patterns and not having loose and abrupt access to thrash the database. For that matter, the corporate slacks-and-tie crowd of ASP.NET "Morts" can often relate to "enterprise" without even realizing it. But for a very small team (10 or fewer) and especially for a micro ISV (5 developers or fewer) with a casual and agile attitude, take the word "enterprise" with a grain of salt. You don't need a gajillion layers of red tape. For that matter, smaller teams are usually small because of tighter budgets, and that usually means tighter deadlines, and that means developer productivity must reign right there alongside stability and performance. So find an ORM solution that emphasizes productivity (minimal maintenance and easily adaptable), and don't you dare trade routine refactoring for task-oriented focus, as you'll end up just wasting everyone's time in the long run. Always include refactoring to simplicity in your maintenance schedule.

Why? Why go raw with ADO.NET direct SQL or choose an ORM? Because some people take the data layer WAY too far. Focus on what matters; take the effort to avoid the effort of fussing with the data tier. Data management is less important than most teams seem to think. The developer's focus should be on the UIX (User Interface/eXperience) and the application functionality, not on how to store the data. There are three areas where the typical emphasis on data management is agreeably important: stability, performance (both of which are why we choose SQL Server over, oh, I dunno, XML files?), and queryability. The latter is important both for the application and for decision makers. But a fourth requirement is routinely overlooked, and that is the emphasis on establishing a lightweight developer workflow for working with data so that you can create features quickly and adapt existing code easily. Again, this is why a proper understanding of OOP--how to apply it, when to use it, etc.--is emphasized all the time by yours truly. Learn the value of abstraction and inheritance and of encapsulating interfaces (resulting in polymorphism). Your business objects should not be much more than POCO objects with application-realized properties. Adding a new simple data-persisted object, or modifying an existing one with, say, a new column, should not take more than a minute of one's time. Spend the rest of that time instead on how best to impress the user with a snappy, responsive user interface.

8. Callback-driven content should derive equally easily from your server, your partner's site, or some strange web service all the way in la-la land. We're aspiring for Web 3.0 now, but what happened to Web 2.0? We're building on top of it! Web 2.0 brought us mashups, single sign-ons, and cross-site social networking. Facebook Applications are a classic demonstration of an excelling student of Web 2.0 now graduating and turning into a Web 3.0 student. The problem is, keeping the momentum going, who's driving this rig? If it's not you, you're missing out on the 3.0 vision.

Why? Because now you can. Hopefully by now you've already shifted the bulk of the view logic over to the client. And you've empowered your developers to focus on the front-end UIX. Now, though, the client view is empowered to do more. It still has to derive content from you, but in a callback-driven architecture, the content is URL-defined. As long as security implications are resolved, you now have the entire web at your [visitors'] disposal! Now turn it around to yourself and make your site benefit from it!

If you're already invoking web services, get that stuff off your servers! Web services queried from the server cost bandwidth and add significant time overhead before the page is released from the buffer to the client. The whole time you're fetching the results of a web service you're querying, the client is sitting there looking at a busy animation or a blank screen. Don't let that happen! Throw the client a bone and let it fetch the external resources on its own.

9. Pay attention to the UIX design styles of the non-ASP.NET Web 2.0/3.0 communities. There is such a thing as a "Web 2.0 look", whether we like to admit it or not; we web developers evolved and came up with innovations worth standardizing on, why can't designers evolve and come up with visual innovations worth standardizing on? If the end user's happiness is our goal, how are features and stable and performant code more important than aesthetics and ease of use? The problem is, one perspective of what "the Web 2.0 look" actually looks like is likely very different from another's or my own. I'm not speaking of heavy gloss or diagonal lines. I most certainly am not talking about the "bubble gum" look. (I jokingly mutter "Let's redesign that with diagonal lines and beveled corners!" now and then, but when I said that to my previous boss and co-worker, both of whom already looked down on me WAY more than they deserved to do, neither of them understood that I was joking. Or, at least, they didn't laugh or even smile.) No, but I am talking about the use of artistic elements, font choices and font styles, and layout characteristics that make a web site stand out from the crowd as being highly usable and engaging. 

Let's demonstrate, shall we? Here are some sites and solutions that deserve some praise. None of them are ASP.NET-oriented.

  • http://www.javascriptmvc.com/ (ugly colors but otherwise nice layout and "flow"; all functionality driven by Javascript; be sure to click on the "tabs")
  • http://www.deskaway.com/ (ignore the ugly logo but otherwise take in the beauty of the design and workflow; elegant font choice)
  • http://www.mosso.com/ (I really admire the visual layout of this JavaServer Pages driven site; and I love the fact that they support ASP.NET on their product)
  • http://www.feedburner.com/ (these guys did a redesign not too terribly long ago; I really admire their selective use of background patterns, large-font textboxes, hover effects, and overall aesthetic flow)
  • http://www.phpbb.com/ (stunning layout, rock solid functionality, universal acceptance)
  • http://www.joomla.org/ (a beautiful and powerful open source CMS)
  • http://goplan.org/ (I don't like the color scheme but I do like the sheer simplicity)
  • .. and for that matter I also love the design and simplicity of http://www.curdbee.com/

Now here are some ASP.NET-oriented sites. They are some of the most popular ASP.NET-driven sites and solutions, but their design characteristics, frankly, feel like the late 90s.

  • http://www.dotnetnuke.com/ (one of the most popular CMS/portal options in the open source ASP.NET community .. and, frankly, I hate it)
  • http://www.officelive.com/ (sign in and discover a lot of features with a "smart client" feel, but somehow it looks and feels slow, kludgy, and unrefined; I think it's because Microsoft doesn't get out much)
  • http://communityserver.com/ (it looks like a step in the right direction, but there's an awful lot of smoke and mirrors; follow the Community link and you'll see the best of what the ASP.NET community has to offer in the way of forums .. which frankly doesn't impress me as much as phpBB)
  • http://www.dotnetblogengine.net/ (my blog uses this, and I like it well enough, but it's just one niche, and that's straight-and-simple blogs)
  • http://subsonicproject.com/ (the ORM technology is very nice, but the site design is only "not bad", and the web site starter kit leaves me shrugging with a shiver)

Let's face it, the ASP.NET community is not driven by designers.

Why? Why do I ramble on about such fluffy things? Because at my current job (see the intro text) the site design is a dump of one feature hastily slapped on after another, and although the web app has a lot of features and plenty of AJAX to empower it here and there, it is, for the most part, an ugly and disgusting piece of cow dung in the area of UIX (User Interface/eXperience). AJAX functionality is based on third-party components that "magically just work" while gobs and gobs of gobbledygook code on the back end attempts to wire everything together, and what AJAX is there is both rare and slow, encumbered by page bloat and server bloat. The front-end appearance is amateurish, and I'm disheartened as a web developer to work with it.

Such seems to be the makeup of way too many ASP.NET solutions that I've seen.

10. Componentize the client. Use "controls" on the client in the same way you might use .ASCX controls on the server, and in the process of doing this, implement a lifecycle and communications subsystem on the client. This is what I want to do, and again I'm thinking of coming up with a framework to pursue it to complement Microsoft's and others' efforts. If someone else (i.e. Microsoft) beats me to it, fine. I just hope theirs is better than mine.

Why? Well if you're going to emphasize the client, you need to be able to have a manageable development workflow.

ASP.NET thrives on the workflows of quick-tagging (<asp:XxxXxx runat="server" />) and drag-and-drop, and that's all part of the equation of what makes it so popular. But that's not all ASP.NET is good for. ASP.NET has two great strengths: IIS and the CLR (namely the C# language). The quality of integration of C# with IIS is incredible. You get near-native-compiled-quality code with the deployment ease of scripted text files, and the deployment is native to the server (no proxying, a la Apache->Tomcat->Java, or even FastCGI->PHP). So why not use these strengths to seed a Javascript-based view rather than to generate the entirety of the view?
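To make the "view seed" idea concrete, here is a minimal sketch of what it might look like. Everything here is illustrative, not a real API: the server-side page emits only a small JSON model into the page, and the client owns the rendering.

```javascript
// The ASPX page would emit only a seed, something like:
//   <script>var pageSeed = <%= /* server-serialized JSON model */ %>;</script>
// (that expression is illustrative, not a real helper). The client then
// builds the view from the seed with plain Javascript:
function buildProductView(seed) {
    // Pure function: takes the seeded model, returns markup.
    return '<h2>' + seed.name + '</h2>' +
           '<p>Price: $' + seed.price.toFixed(2) + '</p>';
}

// Browser-only wiring: inject the rendered view into a placeholder element.
if (typeof document !== 'undefined' && typeof pageSeed !== 'undefined') {
    document.getElementById('product').innerHTML = buildProductView(pageSeed);
}
```

The point of the split is that `buildProductView` is pure and testable without a browser, while the server's only view responsibility is serializing the model.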

On the competitive front, take a look at http://www.wavemaker.com/. Talk about drag-and-drop coding for smart client-side applications, driven by a rich server back-end (Java). This is some serious competition indeed.

11. RESTful URIs, not postback or Javascript inline resets of entire pages. Too many developers of AJAX-driven smart client web apps are bragging about how the user never leaves the page. This is actually not ideal.

Why? Every time the primary section of content changes, in my opinion, it should have a URI, and that should be reflected (somehow) in the browser's Address field. Even if it's going to be impossible to make the URL SEO-friendly (because there are no predictable hyperlinks that are spiderable), the user should be able to return to the same view later, without stepping through a number of steps of logging in and clicking around. This is partly the very definition of the World Wide Web: All around the world, content is reflected with a URL.
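One common way to get "reflected (somehow) in the browser's Address field" without leaving the page, in this era, is the URL hash fragment. This is my own sketch of the technique, not anything prescribed above; the hash format and the `showView` routine are invented.

```javascript
// Map a location hash like "#/orders/1234" to a view name and key.
// Pure function, so it can be tested outside the browser.
function parseViewHash(hash) {
    var match = /^#\/([^\/]+)(?:\/(.+))?$/.exec(hash || '');
    if (!match) return null;
    return { view: match[1], key: match[2] || null };
}

if (typeof window !== 'undefined') {
    // Whenever the primary content changes, record it in the address bar:
    //   window.location.hash = '#/orders/1234';
    // ...and on page load, restore the same view from the hash:
    var state = parseViewHash(window.location.hash);
    if (state) {
        // showView(state.view, state.key); // your view-activation routine
    }
}
```

The user can now bookmark or share the URL and come back to the same view, even though the server only ever saw one page request.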

12. Glean from the others. Learn CakePHP. Build a simple symfony or Code Igniter site. Watch the Ruby On Rails screencasts and consider diving in. And have you seen Jaxer lately?!

And absolutely, without hesitation, learn jQuery, which Microsoft will be supporting from here on out in Visual Studio and ASP.NET. Discover the plug-ins and try to figure out how you can leverage them in an ASP.NET environment.

Why? Because you've lived in a box for too long. You need to get out and smell the fresh air. Look at the people as they pass you by. You are a free human being. Dare yourself to think outside the box. Innovate. Did you know that most innovations come from gleaning other people's imaginative ideas and implementations, and reapplying them in your own world, using your own tools? Why should Ruby on Rails have a coding workflow that's better than ASP.NET's? Why should PHP be a significantly more popular platform on the public web than ASP.NET? What makes it so special besides being completely free of Redmondite ties? Can you interoperate with it? Have you tried? How can the innovations of Jaxer be applied to the IIS 7 and ASP.NET scenario? What can you do to see something as earth-shattering inside this Mortian realm? How can you leverage jQuery to make your web site do things you wouldn't have dreamed of trying to do otherwise? Or at least, how can you apply it to make your web application more responsive and interactive than the typical junk you've been pumping out?

You can be a much more productive developer. The whole world is at your fingertips, you only need to pay attention to it and learn how to leverage it to your advantage.

 

And these things, I believe, are what is going to drive the Web 1.0 Morts in the direction of Web 3.0, building on the hard work of yesteryear's progress and making the most of the most powerful, flexible, stable, and comprehensive server and web development technology currently in existence--ASP.NET and Visual Studio--by breaking out of their molds and entering into the new frontier.




Opinion | Web Development | ASP.NET

Lockup by AJAX is unacceptable

by Jon Davis 13. April 2008 02:59

Browser Vendors: Please Add This To Your Unit Tests

http://www.jondavis.net/codeprojects/synctest/  



Javascript: Introducing Using (.js)

by Jon Davis 12. April 2008 22:37

UPDATE: This is now managed on GitHub and this blog article is now obsolete. See you on GitHub!

https://github.com/stimpy77/using.js/


I'm releasing v1.0 of using.js, which introduces a new way of declaring dependency scripts in Javascript.

http://www.jondavis.net/codeprojects/using.js/

http://github.com/stimpy77/using.js/

The goals of using.js are to:

  • Separate script dependencies from HTML markup (let the script framework figure out the dependencies it needs, not the designer).
  • Make script referencing as simple and easy as possible (no need to manage the HTML files)
  • Lazy-load the scripts so they are not fetched until and unless they are actually needed at runtime

The way it works is simple. Add a <script src="using.js"> reference to the <head> tag:

<html>
  <head>
    <script type="text/javascript" language="javascript" src="using.js"></script>
    <script type="text/javascript" language="javascript">
      // your script here
    </script>
  </head>
  <body> .. </body>
</html>
 

Then in your script, register your potential dependencies. (These will not get loaded until they get used!) 

using.register("jquery", "/scripts/jquery-1.2.3.js"); 

Finally, when you need to begin invoking some functionality that requires your dependency invoke using():

using("jquery"); // loads jQuery and de-registers jQuery from using
$("a").css("text-decoration", "none");

using("jquery"); // redundant calls to using() won't repeat fetch of jQuery because jquery was de-registered from using
$("a").css("color", "green");

Note that this is only synchronous if the global value of using.wait is 0 (the default). You can reference scripts on external domains if you precede the URL in the using.register() statement with true and/or with an integer milliseconds value, or if you set the global using.wait to something like 500 or 1000, but then you must write your dependency usage scripts with a callback. (UPDATE: v1.0.1: Simply providing a callback will also make the load asynchronous.) No problem, here's how it's done:

using.register("jquery", true, "http://cachefile.net/scripts/jquery-1.2.3.js");
using("jquery", function() {
  $("a").css("text-decoration", "none"); //async callback
});

Oh, and by the way, using.register() supports multiple dependency script URLs.

using.register('multi', // 'multi' is the name
    '/scripts/dep1.js', // dep1.js is the first dependency
    '/scripts/dep2.js'  // dep2.js is the second dependency
  );

UPDATE: I just mostly rewrote using.js. Now with v1.1 you can now add subdependencies, like so:

using.register('jquery-blockUI', true,
  'http://cachefile.net/scripts/jquery/plugins/blockUI/2.02/jquery.blockUI.js'
).requires('jquery');

Basically what the new .requires() functionality will do is when you invoke using('jquery-blockUI'); it will also load up jquery first.

UPDATE 2: With v1.2 I've added several new additional touches. Now you don't *have* to declare your subdependencies with using.register(), you can just say:

using('jquery', 'jquery-blockUI', function() {
  $.blockUI();
});

This assumes that jQuery and blockUI have both been registered, the latter without the .requires('jquery') invocation.

That said, though, you don't even have to call .register anymore if you don't want to:

using('url(http://cachefile.net/scripts/jquery/1.2.3/jquery-1.2.3.js)', function() {
  alert($.fn.jquery);
});

There are also two new features that *should* work but I haven't written tests yet:

  1. using.register([json object]); // see using.prototype.Registration
    • object members, and the arguments for the compatible using.prototype.Registration prototype function, are both:
      1. name (string)
      2. version (string, format "1.2.3")
      3. remote (boolean, true if external domain; invoke requires callback)
      4. asyncWait (integer, milliseconds for imposed async; invoke requires callback)
      5. urls (string array)
  2. Registration chaining:
    • using
        .register("myScript", "/myscript.js")
        .register("myOtherScript", "/myotherscript.js").requires('myScript')
        .register("bob's script", "/bob.js");

UPDATE 3: v1.3 fixes the using('url(..)') functionality so that a script loaded this way is remembered, so that it is not fetched again if the same URL is referenced in the same way again. This is the reverse of the using.register() behavior, where if a script is loaded its registration is "forgotten". Also made sure that multiple script URLs listed as using('url(..)', 'url(..)', function(){}); are supported correctly.

If for some strange reason you want the script at the same URL to be re-fetched, try this unsupported hack that might not be available tomorrow:

using.__durls['http://my/url.js'] = undefined;

UPDATE 3.1: V1.3.1 should hopefully fix the "not enough arguments" error that some Firefox users have been having. I was never able to reproduce this, but I did discover after doing some research that Firefox supposedly expects null to be passed into xhr.send(). I guess some systems suffered from this while I didn't. At any rate, I'm passing null now.

UPDATE 3/29/2009:

It is very unfortunate, guys, that the script loader in using.js doesn't really work as designed across all major browsers anymore. The demos/tests on the using.js page have erratic results depending on the browser: they all work fine in current Internet Explorer, but Safari 4 beta fails the "retain context" test (a minor issue), and Firefox fails about two tests, with especially inconsistent results when Firebug is installed. Most of Firefox's failures were not present when using.js was first implemented. It seems as though the browser vendors saw what using.js was taking advantage of as an exploit and started disabling those features.

Pretty soon I'm hopefully going to start looking at all the incompatibilities and failure points that have arisen over the last year to make using.js more capable. In the past I always took pride in building in standalone isolation from jQuery, but I'm using jQuery everywhere now, and jQuery has its own script loader, which apparently works or else it wouldn't be there (haven't tried to use it). That said, though, a port of using.js to jQuery's loader might make sense; the syntactical sugar and programming-think of using.js goes beyond just late script loading, it's more about dependency-checking and load history, and that part being just pure Javascript is NOT broken in the browsers.

UPDATE 3/9/2010:

All of the modern browsers (Chrome, Webkit/Safari, Opera 10.5, IE8) except FF 3.6 now pass all the tests! I figured out what was wrong with Webkit and Opera not handling the "retain context" test properly. It turns out that window.eval() and eval() are not one and the same. The test now invokes eval() instead of window.eval(), and passes.

FF 3.6 still fails two tests: The "no callback" test (XHR is not behaving) and the multiple dependencies test; I'll look into it and follow up with Mozilla.

UPDATE 9/19/2012:

Using.js is now on GitHub! Thanks for all your comments, thumbs-up, and support!

If you have bug reports or suggestions, please post comments here or e-mail me at jon@jondavis.net.




Software Development | Web Development

DynarchLIB AJAX Toolkit Looks Interes... Whoa!

by Jon Davis 3. April 2008 16:48

Another complete suite of a Javascript framework has been released.

http://www.dynarchlib.com/

All I can say is, look! Given our toolsets, I think RIA via Javascript is becoming commonplace. I'm not sure I like the default themes, and the SDK download throws Javascript errors, but the demos are pretty exciting.



Javascript: Cross-Browser EventArgs

by Jon Davis 1. April 2008 22:53

So we came across this need for a consistent Event object. Here, world!

  • eventArgs
    • button (int)
    • buttonLeft (bool)
    • buttonRight (bool)
    • buttonMiddle (bool)
    • domEvent (window.event)
    • keyCode (int)
    • charCode (int)  << not sure about this one :(
    • char                << or this :( 
    • offsetX (int)
    • offsetY (int)
    • relatedElement (DOM Element)
    • srcElement (DOM Element)

var EventArgs = function(e, _domElement) {
    var msie = window.navigator.userAgent.toLowerCase().indexOf('msie') > -1;
    if (!e) e = window.event; //msie
    this.domEvent = e;
    this.srcElement = _domElement;
    if (!this.srcElement) {
        if (e.srcElement) this.srcElement = e.srcElement;
        else if (e.target) this.srcElement = e.target;
    }
    
    this.button = e.button;
    // use msie's button map as it has more data
    if (!msie &&
        (e.type == "mousedown" ||
         e.type == "mouseup" ||
         e.type == "mousemove")) {
         switch (this.button) {
            case 0:
                this.button = 1; // left
                break;
            case 1:
                this.button = 4; // middle
                break;
            case 2:
                this.button = 2; // right
                break;
         }
    }

    var LEFT = 1;
    var RIGHT = 2;
    var MIDDLE = 4;
    this.buttonLeft = (this.button & LEFT) == LEFT;
    this.buttonRight = (this.button & RIGHT) == RIGHT;
    this.buttonMiddle = (this.button & MIDDLE) == MIDDLE;
   
    this.offsetX = e.offsetX;
    this.offsetY = e.offsetY;
    if (!this.offsetX && !msie && e.layerX) {
        this.offsetX = e.layerX;
        this.offsetY = e.layerY;
    }
    this.relatedElement = e.relatedTarget;
    if (e.type == "mouseover" && !this.relatedElement && msie && e.fromElement) {
        this.relatedElement = e.fromElement;
    }
    if (e.type == "mouseout" && !this.relatedElement && msie && e.toElement) {
        this.relatedElement = e.toElement;
    }
    this.keyCode = e.keyCode;
    this.charCode = e.charCode;
    if (!this.charCode) {
        if (e.shiftKey == false) {
            this.charCode = this.keyCode + 32; // hmmph .. todo: replace this

        } else {
            this.charCode = this.keyCode;  // yuck
        }
    }
    this.char = String.fromCharCode(this.charCode ? this.charCode : this.keyCode);
};

This is a lil wordy but more or less self-documenting.

So where you would normally pass event, instead pass new EventArgs(event). This abstracts away the browser differences based on the model described in the bulleted list at the top of this blog post. :)

<a href="#" onmouseup="doSomething(new EventArgs(event))">click me</a>  




MVC On The Client In Javascript

by Jon Davis 1. April 2008 04:30

I stumbled across this over the weekend.

http://javascriptmvc.com/

I was actually very surprised by how closely it resembles what we've been working on at the office. Ours uses a controller to manage and control events and event propagation, track "view objects" (we call 'em "client controls" for drag-and-drop support in Visual Web Developer), and manage AJAX calls. And we've spec'd out to use RESTful URIs to manage data model retrieval and callbacks, and these are cacheable using Google Gears, Flash storage, or *shrug* cookies.

Theirs has a few additional features, though, some of which I think we can glean from, like:

  • script librarian ("Include"), which we don't need but I think we could accomplish using something like JSLoader
  • a complete ActiveRecord-like modeling pattern
  • a complete ASP-like templating system that executes on the client
  • "everything is a plug-in" philosophy

I like what I see, although our own framework goes further as it is built with ASP.NET, ASP.NET MVC, Visual Studio, and Expression Web all in mind. With ours, we enable our web designer, who is not an engineer, to create complete, non-Flash RIA web pages without coding. Using Expression Web or Visual Web Developer, he can click on one of our controls in the Toolbox, drag it out to the page, absolutely position it, stylize it, give it a data source URI, and have it subscribe to other controls' events (think Flash video player, responding to the events of media playback controls). The entire multi-page web site will support executing in the rich execution environment of a single-page RIA application with a seamless user experience. And since the framework is not done in Flash (although Flash "client controls" are supported), it will support continuous extensions using the wonderfully universal languages of HTML and Javascript, both at design-time (creating new controls, customizing existing controls) and at runtime (RESTful fetches of web content, dynamic execution of JSON models, etc).

In some ways, ours is looking like http://www.wavemaker.com/, except that WaveMaker is based on Java and dojo, and the designer experience is in-page (which is way too much support overhead--why reinvent the designer when Visual Studio / Expression Web can do the job on its own?).

But I'd certainly recommend Javascript MVC (JavascriptMVC.com) as a skeleton foundation framework for someone to roll their own framework. We were thinking about open-sourcing our client bits once we are done with our prototype, but I think Javascript MVC comes close enough that it would do just as well to recommend that one instead. Mind you, I have never used it, I'm only suggesting it based on what I'm seeing at their web site.


WebToolkit.info Scripts Wrapped

by Jon Davis 30. March 2008 02:59

A buddy and I were poking around at the sample scripts at http://tide4javascript.com when my buddy noticed a crc32 implementation. I followed the trail and found a number of interesting utility scripts at webtoolkit.info.

I thought they were pretty worthy so I wrapped them up and packaged them as a utility library and posted it here:

http://cachefile.net/scripts/webtoolkit.info/ 

I also added a test page, which also makes for a decent quick and dirty demo page.

http://cachefile.net/scripts/webtoolkit.info/2008.03.30/test.html 

UPDATE: I stumbled upon a blog post called "Top Ten Javascript Functions of All Time" (http://www.dustindiaz.com/top-ten-javascript/). I decided to append these functions. I quickly deprecated the webtoolkit.info URL and made it just "webtoolkit".



Web Development

Oliver Steele, Javascript, and Data Models

by Jon Davis 30. March 2008 02:48

Ugh, just when I was getting all smug about our progress with CRUD operations and how we've designed a ridiculously simplistic yet powerful approach to our new Javascript framework at the office, my co-worker finds this and asks, "Hey, why aren't we doing it this way?"

http://osteele.com/archives/2008/02/synchronizing-client-models

And then I realize we're just not doing enough. Good blog post, though.



Web Development

Highlighting a TR (HTML Table Row) With A Border

by Jon Davis 29. March 2008 15:44

Here's another workaround to fix another elementary problem in Internet Explorer. Again, this isn't anything new, but if anyone is coming across this blog looking for an answer to this problem, here's the solution.

(And by the way, yeah, this is really elementary. I should be focusing on real problems.)

Someone in a local technology mailing list asked for help on how one highlights a table row in Javascript. He tried the following function, but it did not work for him.

function OutlineTableRow(RowID,BColor,BWidth,BStyle)
{
 var TableRow = document.getElementById(RowID);
 if(TableRow)
 {
  TableRow.style.borderColor = BColor;
  TableRow.style.borderStyle = BStyle;
  TableRow.style.borderWidth = BWidth;
 }
}

So how do you border-highlight a row in HTML? Internet Explorer doesn't support CSS borders on the TR like it should. You have to apply them to the cells themselves. You also have to be careful not to divide the cells with borders; the leftmost and rightmost cells should be the only cells to get left or right borders, respectively. Finally, you must also set the border-collapse CSS property on the table to "collapse", otherwise the border itself will have separation points on the inner edges of each cell.

Here's my workaround in Javascript, feel free to copy:

<html>
    <body>
        <table>
            <tr>
                <td>1</td>
                <td>2</td>
                <td>3</td>
            </tr>
            <tr id="aa">
                <td>1</td>
                <td>2</td>
                <td>3</td>
            </tr>
            <tr>
                <td>1</td>
                <td>2</td>
                <td>3</td>
           </tr>
        </table>
        <script type="text/javascript" language="javascript" >
            function outlineTableRow(rowId, borderColor, borderWidth, borderStyle){
                var tableRow = document.getElementById(rowId);
                if (tableRow) {
                    var table = tableRow.parentNode;
                    while (table.tagName.toLowerCase() != "table") {
                        table = table.parentNode;
                    }
                    table.style.borderCollapse = "collapse";
                    var tableCells = tableRow.getElementsByTagName('td');
                    if (tableCells.length > 0) {
                   
                        for (var i = 0; i < tableCells.length; i++) {
                            if (i == 0) {
                                tableCells[i].style.borderLeftColor = borderColor;
                                tableCells[i].style.borderLeftStyle = borderStyle;
                                tableCells[i].style.borderLeftWidth = borderWidth;
                            }
                            else
                                if (i == tableCells.length - 1) {
                                    tableCells[i].style.borderRightColor = borderColor;
                                    tableCells[i].style.borderRightStyle = borderStyle;
                                    tableCells[i].style.borderRightWidth = borderWidth;
                                }
                            tableCells[i].style.borderTopColor = borderColor;
                            tableCells[i].style.borderTopStyle = borderStyle;
                            tableCells[i].style.borderTopWidth = borderWidth;
                            tableCells[i].style.borderBottomColor = borderColor;
                            tableCells[i].style.borderBottomStyle = borderStyle;
                            tableCells[i].style.borderBottomWidth = borderWidth;
                           
                        }
                    }
                }
            }
           
            window.onload = function(){
                outlineTableRow('aa', '#f00', '2px', 'outset');
            }
        </script>
    </body>
</html>

Result:
1 2 3
1 2 3
1 2 3

But one should use CSS for this. Rather than explicitly setting [element].style.[cssproperty], instead one should set the className property, then define the details in CSS. If you really want to pass arbitrary styles to a function, jQuery would also be essential for doing this. Come to think of it, jQuery would be essential, regardless.
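Here's what that className-based variant might look like. The class names (.row-hl, .row-hl-first, .row-hl-last) are invented for this sketch; you'd define the actual borders in your stylesheet, along with table { border-collapse: collapse; }.

```javascript
// Decide each cell's class: only the first cell gets the left edge and
// only the last gets the right edge, matching the border logic above.
// Pure function, testable without a browser.
function rowCellClass(index, cellCount) {
    var cls = 'row-hl'; // .row-hl carries the top/bottom borders in CSS
    if (index == 0) cls += ' row-hl-first';            // left border
    if (index == cellCount - 1) cls += ' row-hl-last'; // right border
    return cls;
}

// Browser-only: apply the classes to the row from the example above.
if (typeof document !== 'undefined') {
    var row = document.getElementById('aa');
    if (row) {
        var cells = row.getElementsByTagName('td');
        for (var i = 0; i < cells.length; i++) {
            cells[i].className = rowCellClass(i, cells.length);
        }
    }
}
```

The styling details now live in one place (the stylesheet) instead of being sprayed across a dozen [element].style assignments.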



Web Development


 


About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.
